About Zeta
Zeta is a next-gen banking tech company that empowers banks and fintechs to launch banking products for the future. It was founded by Bhavin Turakhia and Ramki Gaddipati in 2015.
Our flagship processing platform, Zeta Tachyon, is the industry’s first modern, cloud-native, and fully API-enabled stack that brings together issuance, processing, lending, core banking, fraud & risk, and many more capabilities in a single-vendor stack. 20M+ cards have been issued on our platform globally.
Zeta is actively working with some of the largest banks and fintechs in multiple global markets, transforming the customer experience for multi-million-card portfolios.
Zeta has 1,700+ employees, with over 70% of roles in R&D, across locations in the US, EMEA, and Asia. In 2021, we raised $280 million at a $1.5 billion valuation from SoftBank, Mastercard, and other investors.
About the Role
As a Data Engineer II, you will play a crucial role in developing, optimizing, and managing our company's data infrastructure, ensuring the availability and reliability of data for analysis and reporting.
Responsibilities
- Database Design and Management: Design, implement, and maintain database systems. Optimize database performance and ensure data integrity. Troubleshoot and resolve database issues.
- ETL (Extract, Transform, Load) Processes: Develop and maintain ETL processes to move and transform data between systems. Ensure the efficiency and reliability of data pipelines.
- Data Modeling: Create and update data models to represent the structure of the data.
- Data Warehousing: Build and manage data warehouses for the storage and analysis of large datasets.
- Data Integration: Integrate data from various sources, including APIs, databases, and external datasets.
- Data Quality and Governance: Implement and enforce data quality standards. Contribute to data governance processes and policies.
- Scripting and Programming: Develop and automate data processes using programming languages (e.g., Python, Java, SQL). Implement data validation scripts and error-handling mechanisms.
- Version Control: Use version control systems (e.g., Git) to manage codebase changes for data pipelines.
- Monitoring and Optimization: Implement monitoring solutions to track the performance and health of data systems. Optimize data processes for efficiency and scalability.
- Cloud Platforms: Work with cloud platforms (e.g., AWS, Azure, GCP) to deploy and manage data infrastructure. Utilize cloud-based services for data storage, processing, and analytics.
- Security: Implement and adhere to data security best practices. Ensure compliance with data protection regulations.
- Troubleshooting and Support: Provide support for data-related issues and participate in root cause analysis.
Skills
- Data Modeling and Architecture: Design and implement scalable, efficient data models; develop and maintain conceptual, logical, and physical data models.
- ETL Development: Create, optimize, and maintain ETL processes to efficiently move data across systems; implement data transformation and cleansing processes to ensure data accuracy and integrity.
- Data Warehouse Management: Contribute to the design and maintenance of data warehouses.
- Data Integration: Work closely with cross-functional teams to integrate data from various sources; implement solutions for real-time and batch data integration.
- Data Quality and Governance: Establish and enforce data quality standards.
- Performance Tuning: Monitor and optimize database performance for large-scale datasets; troubleshoot and resolve issues related to data processing and storage.