Data Engineer


GetMega

GetMega is one of the first skill-based Real Money Gaming platforms for the smartphone generation of the world. We are building a platform where players are pitted against each other to compete in various games requiring skills like reaction, attention, spatial and logical reasoning, visual and auditory processing, etc. We have games across various categories, including some well-known card and casual games. You should definitely check out our app at www.getmega.com

GetMega was founded by two Computer Science undergraduates from IIT Kanpur who previously ran and sold a VC-funded startup in the logistics domain. We are based out of Bengaluru, India, and have raised multi-million dollar funding from top-tier institutional venture capitalists to disrupt the multi-billion dollar skill-based gaming market around the world.


Values of the A-Grade Data Engineer:

Product Mindset - We believe in a mix of data-led and intuitive/Player Focus Group Discussion (FGD) based product decisions. You will work closely with our Players, Data and Design teams to understand our players' psyche, so we can create a more frictionless experience for Mega players.

Ownership & Delivery - We work in small cross-functional teams, making a large impact on the products we create from idea to launch.

Iterate Quickly - We have a bias towards building MVPs and collecting direct feedback from the users of our software for faster iteration.

As the first Data Engineer at Mega, you will have a rare opportunity to shape Mega’s data architecture, infrastructure, development and deployment practices while evangelising a strong data-driven culture across all teams. You will focus on making data accurate and accessible, and on building scalable systems to access and process it. Another major responsibility is helping AI/ML Engineers write better code. You will also build scalable, high-performance, data-intensive services. We are a fast-growing company with leadership opportunities available to you as the team continues to expand.


What should you deeply care about?
SQL, GCP/AWS, BIG DATA INFRASTRUCTURE, DATA MODELLING, DATA ARCHITECTURE, QUERY PROCESSING

What would you be doing?

    • Create and maintain optimal data pipeline architecture.
    • Develop, recommend and implement process and procedure changes to systematically improve data integrity.
    • Work with product managers, engineers and data scientists to experiment with and build features driven by data and algorithms.
    • Assemble large, complex data sets that meet functional / non-functional business requirements.
    • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
    • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
    • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
    • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
    • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.

Superpowers you’ll need to be a success in this role:

    • Experience writing high quality, maintainable SQL on large datasets.
    • Ability to write code in Python, Ruby, Scala or another Big Data platform language.
    • Expertise in star-schema data modelling.
    • Expertise in the design, creation and management of large datasets/data models.
    • Experience building and optimizing logical data models and data pipelines while delivering high-quality data solutions that are testable and adhere to SLAs.
    • Experience with AWS or GCP services, including BigQuery, Cloud Storage, S3, Redshift, EMR and RDS.
    • Experience in Cloud SQL and Cloud Bigtable
    • Experience with Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub and Genomics.
    • Experience with Google Transfer Appliance, Cloud Storage Transfer Service and BigQuery Data Transfer Service.
    • Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and with data processing models (MapReduce, Flume).

What would you get from Mega?

    • The rare opportunity to join one of the hottest Indian startups, with multi-million dollar funding. We are not just another gaming studio; we are creating the largest skilling platform in the world.
    • Above-market compensation. We believe that to hire the country's top talent in any field (tech/product/design), we need to be a paymaster.
    • Sweet perks like daily catered breakfasts and lunches, endless coffee, and dry snacks. We believe in enabling our employees by removing trivial tasks like packing breakfast, ordering lunch, or going out for coffee.
    • Swanky custom-built office space. Any individual spends a major portion of their life in the office, and there's no reason to cut corners in providing the best office space ever.
    • Reimbursements. The company will reimburse any expense made by an employee to reach our goals. Period.
    • We are open to any further benefits that will empower our employees to create the Mega we collectively envision.

GetMega is committed to building a diverse and inclusive company that celebrates and develops individuals of all backgrounds. We are an equal opportunity employer and encourage all applicants.

Location: Bangalore

Date posted: 2022-01-20