Careers @ Crisp Analytics

We currently have the following open positions.

If your skills align with our requirements, send us your resume at

Big Data Engineer

Job Requirements
• Installation, configuration and administration of Big Data components (including Hadoop/Spark) for batch and real-time analytics and data hubs
• Capable of processing large sets of structured, semi-structured and unstructured data
• Able to assess business rules, collaborate with stakeholders, and perform source-to-target data mapping, design and review
• Familiar with data architecture: data ingestion pipeline design, Hadoop information architecture, data modelling and data mining, machine learning and advanced data processing
• Optional – Visual communicator: ability to convert and present data as easily comprehensible visualizations using tools such as D3.js or Tableau
• To enjoy being challenged and solving complex problems on a daily basis
• Proficient in executing efficient and robust ETL workflows
• To be able to work in teams and collaborate with others to clarify requirements
• To be able to tune Hadoop solutions to improve performance and end-user experience
• To have strong coordination and project management skills to handle complex projects
• Engineering background

• Big Data Ecosystems: Storm, Kafka, Spark, Flink, Logstash, Elasticsearch, Solr, NiFi, ZooKeeper, Cassandra, Hadoop, Hive, Pig, Sqoop, Oozie, Flume
• Programming Languages: Java
• Scripting Languages: JavaScript, D3.js, Python, Bash; R (optional)
• Databases: NoSQL, SQL 
• Tools: IDE, Git, Maven
• Platforms: Linux/Unix
• Application Servers: Apache Tomcat, Node.js
• Desired Domain Experience: Credit Cards / Banking and Financial Services

Data Scientist

Job Requirements
• Strong primary expertise as data engineer or data scientist, with the ability to stretch beyond one’s core field of expertise.
• PhD or Master's degree with at least two years of relevant experience as a strong contributor on a data science team
• Relevant degree in Statistics, Math / Applied Math, Operations Research, Computer Science, Economics or Quantitative Finance
• Expertise in at least one analytics function: attribution, segmentation, response modeling, churn, propensity, customer LTV, supply chain / logistics, geospatial inference, recommender systems, causal inference, forecasting, pricing, NLP or image processing
• Proficiency in a core programming language, such as: Python, C/C++, Scala, Java, Ruby
• Proficiency in R/Python, particularly to prototype mathematical models
• Proficiency with SQL and/or NoSQL
• Proficiency with Tableau, R Shiny, or other data visualization tools
• Ability to scope and define the data sets needed for specific use cases and to identify data gaps
• Ability to translate scientific insights into product decisions and work streams
• Flexibility to handle directional changes and moving priorities to ensure project success
• Strong oral and written communication skills
• Strong client management skills – ability to lead and persuade, positive energy, relentless focus on business impact

Nice to have
• Additional domain knowledge and technical expertise a big plus
• Experience with AWS or Azure cloud computing environments
• Proficiency with a distributed computing platform (Hadoop, Spark, etc.)
• Experience querying and administering big data storage services (Redshift, Teradata, Aurora, DynamoDB, etc.)
• Experience with general software release cycles / shipping machine learning or predictive analytics models at scale