Opportunity
KlearNow is driving an unprecedented transformation in how our business grows, rethinking the way we engage with customers and partners, and how the world’s trade flows across our global network.
Our software and technology are center stage in creating value for our business and our customers. In this Data Engineer role, you will work within the AI & Data Analytics business platform, as part of the forecasting and modelling data science team, developing software and systems for real-time dynamic pricing and related revenue management needs.
We Offer
This position offers a unique opportunity to develop and apply your cutting-edge knowledge of software and data engineering to create results and insights that are transforming the transport and logistics industry. As a Data Engineer with KlearNow, you will be part of the community of data & engineering practitioners across the company, where we develop the foundations of our future business.
- We operate in a fast-paced environment utilizing modern technologies.
- We embrace innovation methods where we have a close dialogue with end users, make early use of mock-ups & POCs and are committed to incremental development.
- We value customer outcomes and are passionate about using technology to solve problems.
- We are a diverse team with colleagues from different backgrounds and cultures.
- We offer the freedom, and the responsibility, to shape the setup and processes we use in our community.
- We support continuous learning, including through conferences, workshops, and meetups.
This is an extremely exciting time to join a dynamic team that solves some of the toughest problems in the industry and builds the future of trade & logistics. KlearNow’s Technology organization offers a unique opportunity to impact global trade. We focus on our people, and the right candidate will have ample opportunity to further develop their competencies in an environment characterized by change and continuous progress.
Key Responsibilities
You will be part of a strong data science team that also implements its work in software, shaping product and business decisions to help drive our digital transformation.
Your responsibilities include:
- Help design, architect, and implement new and existing data pipelines as a core foundation for both our data science R&D and our real-time production systems.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources (a minimal pipeline sketch follows this list).
- Deeply understand our customer and business problems and apply your knowledge of large-scale data technologies to deliver the outcomes our customers need. We value team members who are not only technical specialists but are also excited about understanding the logistics domain; no prior experience in logistics is required, so long as you are committed to learning!
- Work with our internal platform engineering teams to develop the functionality of our future data systems.
- Independently direct your time and resources, together with the other data/software engineers and data scientists on the team, and participate in transforming the way we work. The cultural transformation of our business and industry is the foundation of long-term growth and success.
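To give a flavour of the pipeline work described above, here is a minimal PySpark batch ETL sketch: extract raw events, transform them into a tidy schema, and load them to a curated store. All paths, column names, and the event shape are hypothetical illustrations, not references to KlearNow systems.

```python
# Minimal batch ETL sketch in PySpark. Paths, columns, and the
# event schema below are hypothetical, for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("shipment-etl").getOrCreate()

# Extract: read raw JSON events from a landing zone.
raw = spark.read.json("s3://example-bucket/landing/shipments/")

# Transform: drop malformed rows, normalize types, derive a partition column.
curated = (
    raw.filter(F.col("shipment_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .select("shipment_id", "event_ts", "event_date", "status", "port_code")
)

# Load: write partitioned Parquet for downstream data science and BI use.
(curated.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-bucket/curated/shipments/"))
```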
Who we are looking for
- You hold a bachelor’s degree, ideally in computer science, engineering, or a closely related field. The ideal candidate will hold an advanced degree with a focus on distributed systems, cloud technologies, or ML/AI.
- More than 4 years of industry experience in data engineering or software development, gained in a software or analytics-intensive company as part of a team developing software. This is a hands-on development role.
- Understanding of and experience with the engineering of batch and real-time data science solutions, especially the underlying large-scale data systems and data pipelines they rely on.
- Experience developing and maintaining databases and data systems, including reorganizing data into usable formats.
- Track record of delivery using languages and frameworks such as Java, Scala, Python, PySpark, and SparkSQL. You have hands-on experience contributing to products built within the Spark ecosystem.
- Working knowledge of technologies like:
- Apache Hadoop, Spark and structured streaming, pub/sub systems (e.g. Kafka, Kinesis, and Event Hubs), Docker, Jenkins, Kubernetes, and cloud platforms such as Azure, GCP, or AWS (a structured-streaming sketch follows below).
- More generally, you have experience with or are familiar with many of the following technologies and practices we use:
- CI/CD, DevOps, microservices, monitoring, Git/source control, effective documentation
- Python, SQL, Spark/Databricks, SQLAlchemy, DataDog, PowerBI, NoSQL
- Pub/sub systems, ETL, data modelling, data marts, data lakes, data quality, BI/dimensional modelling
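As a sketch of the pub/sub and structured-streaming items above, here is a minimal Spark Structured Streaming job consuming JSON events from a Kafka topic. The broker address, topic name, schema, and output paths are assumptions for illustration only.

```python
# Minimal Spark Structured Streaming sketch: consume JSON events from
# a Kafka topic and append parsed rows to a Parquet sink. Broker, topic,
# schema, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("pricing-events-stream").getOrCreate()

event_schema = StructType([
    StructField("shipment_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("status", StringType()),
])

# Read: subscribe to a Kafka topic as an unbounded streaming DataFrame.
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "shipment-events")
         .load()
)

# Parse: Kafka delivers bytes; cast the value to string and unpack the JSON.
parsed = (
    stream.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*")
)

# Write: append to Parquet, with a checkpoint so the job can recover on restart.
query = (
    parsed.writeStream.format("parquet")
          .option("path", "s3://example-bucket/streams/shipment-events/")
          .option("checkpointLocation", "s3://example-bucket/checkpoints/shipment-events/")
          .outputMode("append")
          .start()
)

query.awaitTermination()
```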