Specialist Solutions Architect - Data Engineering
United States, California, Foster City
About AI & Analytics:

Artificial intelligence (AI) and the data it collects and analyzes will soon sit at the core of all committed, human-centric businesses. By decoding customer needs, preferences, and behaviors, our clients can understand exactly what services, products, and experiences their consumers need. Within AI & Analytics, we work to craft a future in which trial-and-error business decisions have been replaced by informed choices and data-supported strategies. By applying AI and data science, we help leading companies prototype, refine, validate, and scale their AI and analytics products and delivery models. Cognizant's AIA practice takes insights that are buried in data and gives businesses a clear way to transform how they source, interpret, and consume their information. Our clients need flexible data structures and a streamlined data architecture that quickly turns data resources into relevant, usable intelligence.

A Specialist Solutions Architect - Data Engineering is a customer-facing role focused on designing and implementing complex data engineering solutions on a specific platform (such as Databricks or AWS). The role leverages deep expertise in big data technologies such as Apache Spark to guide clients through the entire data pipeline development process, from data ingestion to model deployment, while acting as a technical leader to ensure successful project implementation and optimal platform utilization across a variety of use cases.

Key Responsibilities:
Design and architect robust data pipelines, including data ingestion, transformation, processing, and loading mechanisms, aligning with customer business needs and technical constraints.
Build and demonstrate proof-of-concept implementations to showcase the capabilities of the platform and address specific customer use cases.
Implementation Support: Provide hands-on guidance during implementation phases, including technical troubleshooting, performance tuning, and code review.
Deep Expertise in Big Data Technologies: Extensive experience with Apache Spark (PySpark, Scala), data processing frameworks, distributed computing concepts, and data lake technologies.
Data Pipeline Design: Proven ability to design and build complex data pipelines using tools like Airflow, Luigi, or similar workflow management systems.
Cloud Computing: Familiarity with cloud platforms like AWS, Azure, or GCP, including their data services and capabilities.
Communication and Presentation Skills: Superb communication skills to effectively explain complex technical concepts to both technical and non-technical customers.
Potential Specializations:
Streaming Data Processing: Expertise in real-time data processing technologies like Apache Kafka, Spark Streaming, or Kinesis.
Knowledge of integrating data pipelines with machine learning models for predictive analytics.
Data Governance and Security: Understanding of data governance principles, data quality checks, and standard data security processes.
Experience:
Business Acumen:
Applications will be accepted until 4/11/2025. The annual salary for this position is between $125,000 and $205,000, depending on the experience and other qualifications of the successful candidate. This position is also eligible for Cognizant's discretionary annual incentive program, based on performance and subject to the terms of Cognizant's applicable plans.

Benefits: Cognizant offers the following benefits for this position, subject to applicable eligibility requirements:
Disclaimer: The salary, other compensation, and benefits information is accurate as of the date of this posting. Cognizant reserves the right to modify this information at any time, subject to applicable law.