Hiring AVP-Data Science (R, Python, Big Data, Scala, Apache Spark, Hadoop, HDFS, MLlib, Airflow, ETL) – 3 openings, 4-7 Yrs, 15-20 LPA, for an e-commerce company in Delhi
Skills Required: R, Python, Big data, Scala, Apache Spark, Hadoop, HDFS, MLlib, Airflow, ETL
Job Code: 3UGoal101/AVPDSc3/47Y1520LD/30A19
If interested, please share your updated CV, current CTC, expected CTC, total experience, notice period, and location to our WhatsApp number 96202-49496, or reply to this email without changing the subject line.
Location: Delhi
No. of openings: 3
Please make sure the candidate qualifies against the checklist below:
Ques: Does the candidate have excellent working knowledge of R and Python?
Ques: Is the candidate a full-time B.Tech from a Tier 1/Tier 2 institute?
Ques: How much experience does the candidate have in R, Python, and big data technologies (Spark, Hadoop, etc.)?
Ques: Is the candidate currently working in an e-commerce company?
Client: A fast-growing fintech startup that uses big data to provide solutions to its clients.
Experience: 4.0 – 7.0 years
Annual fixed CTC: Min 15.0 Lacs, Max 20.0 Lacs
No. of openings: 3
Target companies: Flipkart, MobiKwik, Ecom, Paytm, Swiggy, Oyo, etc.
Working days: 5
• Set the vision, create the roadmap, and maintain and invest in infrastructure, team, and process.
• Set the culture and mission to attract the best team possible. Continuously refine the set of priorities for a team of Data Scientists and Data Engineers.
• Oversee the development of the technology stack that will enable data exploration and analysis, including data architecture, tagging and operational processes, data taxonomy, and reporting.
• Work with all stakeholders (marketing, operations, merchandising, finance, product design, etc.) by gathering data from all business units, developing requirements, ascertaining priorities, and reporting progress.
• Build applications, both consumer-facing and internal, so that we can collect and analyse billions of real-time data points on our products, service, and customers – and instantaneously optimize customer experience or resource utilization.
• Manage reports, create dashboards, and visualize data to communicate information effectively to stakeholders.
Job Title: AVP-Data Science
Location: New Delhi
Total Positions: 2
BU/Function : Data Science
Reports To: Head-Data Science
Direct Reports: –
Industry: Financial Services / FinTech / AdTech / MarTech / Banking / IT & Technology
Education: BE/BTech & MBA from Premier Institute
Functional Area: IT Software, Technology, Data Science, Big Data, Cloud, Data Warehouse
Role: Individual Contributor
Employment Type: Permanent Job, Full Time
• Ensure all three phases of ETL (extract, transform, load) execute in parallel and are managed seamlessly.
• Consider important KPIs and measurements including latency, concurrency, access pattern, queries, data scope, end users, and technologies employed.
• Building predictive models on, and running real-time experiments against, web-log-scale data; natural language processing; and applications of deep learning
• Experience across both data science and data engineering and ability to develop best practice and discipline for the team
• Passion for data visualisation and effective communication with data
• Experience managing and developing data science teams (or, at the very least, coaching junior members of a team)
Technical Toolset Knowledge: TensorFlow, SQL, R, big data technologies (Spark, Hadoop, etc.)
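The "three phases of ETL in parallel" responsibility above can be sketched as a producer/consumer pipeline, where each stage streams batches to the next instead of waiting for the previous stage to finish. This is a minimal illustration only; the `extract`, `transform`, and `load` functions are hypothetical stand-ins, not part of any specific stack named in this role.

```python
# Sketch: extract, transform, and load running concurrently as a
# producer/consumer pipeline. All stage functions are hypothetical
# placeholders for illustration.
import queue
import threading

SENTINEL = object()  # marks end-of-stream between stages

def extract(raw_rows, out_q):
    for row in raw_rows:            # e.g. read from files, APIs, or a log stream
        out_q.put(row)
    out_q.put(SENTINEL)

def transform(in_q, out_q):
    while (row := in_q.get()) is not SENTINEL:
        out_q.put({"value": row * 2})  # placeholder transformation
    out_q.put(SENTINEL)

def load(in_q, sink):
    while (row := in_q.get()) is not SENTINEL:
        sink.append(row)               # e.g. write a batch to the warehouse

def run_pipeline(raw_rows):
    q1, q2, sink = queue.Queue(), queue.Queue(), []
    stages = [
        threading.Thread(target=extract, args=(raw_rows, q1)),
        threading.Thread(target=transform, args=(q1, q2)),
        threading.Thread(target=load, args=(q2, sink)),
    ]
    for t in stages:
        t.start()       # all three phases run at once
    for t in stages:
        t.join()
    return sink

print(run_pipeline([1, 2, 3]))  # -> [{'value': 2}, {'value': 4}, {'value': 6}]
```

In production this shape is usually realized with an orchestrator such as Airflow (listed in the skills above) rather than raw threads, but the overlapping-phases idea is the same.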
• 3+ years of experience working on and managing analytics/data science teams at consumer-facing companies (ideally in the eCommerce and/or subscription space)
• Ability to both manage and recruit a team while still being hands-on.
• Fluency in R, Python, or Julia.
• Experience with relational databases / SQL.
• Experience using DynamoDB, Cassandra, HBase, or another non-relational database.
• Strong data visualization skills.
• Proven ability to set a vision of where we will be in 2-5 years and set in place the systems-level thinking to get there.
• General industry knowledge of how distributed database infrastructure has been the solution to handling some of the biggest data warehouses on the planet – e.g. Netflix, Google, Amazon, Facebook, LinkedIn, and Twitter.
• Solid understanding of the data science project lifecycle, including initiation, identification of data needs, methodology selection, proof of concept, release and version control, validation and experimentation, production releases, and maintenance and iteration.
• Deep understanding of how to extract data from homogeneous or heterogeneous sources (ETL) and transform it into the proper format or structure for storage, querying, and analysis.
• Experience developing dashboards and key metrics to track the business and inform strategy.
• Comfort with ambiguity and constant change.
• Strong communication skills, to ensure your team understands the "why" behind what they are building as well as how success will be measured.
Tagged as: airflow, apache spark, big data, ETL, hadoop, hdfs, mllib, python, R, scala