WHAT YOU DO AT AMD CHANGES EVERYTHING

We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences: the building blocks for the data center, artificial intelligence, PCs, gaming, and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world's most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives.

AMD together we advance_

THE ROLE:

AMD is looking for a highly motivated product engineer to own and manage the customer experience for AMD's Software Solutions for ML inferencing. The successful candidate will:
- Drive adoption of AMD's ML inferencing solutions by streamlining customer on-boarding and delivering a best-in-class customer experience
- Work with R&D, marketing, and key customers to drive the evolution of the product
- Support strategic business engagements leveraging AMD's ML inferencing solutions
We have competitive benefit packages and an award-winning culture. Join us!

THE PERSON:

The ideal candidate will be capable of communicating problems, solutions, and project status up to the executive level. You are capable of assuming consultative or senior-level responsibilities for large projects or product development initiatives while working with internal and external groups on behalf of the group or project. This role offers the opportunity to influence technical decisions that have a significant impact on multiple products or the product line.

KEY RESPONSIBILITIES:
- Evaluate end-to-end ML inferencing pipelines optimized for AMD's embedded devices. This role involves performance benchmarking, memory profiling, and bottleneck analysis, as well as extensive flow and usability analysis for model inferencing flows.
- Develop use cases, examples, tutorials, and methodology collateral for AI inference software components and ecosystems targeting embedded devices, including the Compiler, Quantizer, Optimizer, Runtime, Profiler, Visualizer, Debugger, and more.
- Engage with engineering teams to collaborate on product development, analyze product/feature specifications and usability to provide early feedback, and understand product/feature usage holistically to identify potential customer pain points.
- Interact with internal and external customers to understand their issues, assist them in debugging their workflows to meet critical model and flow-specific requirements, and create use cases to reproduce issues, driving successful adoption.
- Collaborate with the sales and marketing teams on strategic business engagements.
- Work closely with R&D to prioritize issues and manage escalations.
PREFERRED EXPERIENCE:
- Experience working with AI/ML frameworks such as PyTorch and ONNX, especially in resource-constrained embedded environments.
- Proven expertise deploying, optimizing, and troubleshooting ML models on embedded hardware platforms including microcontrollers, custom SoCs, FPGAs, or similar devices.
- Hands-on experience integrating AI/ML workloads with embedded real-time operating systems (RTOS) or bare-metal environments, including firmware-level model execution.
- Familiarity with embedded system constraints such as limited compute, memory, and power, and an understanding of how to tailor ML solutions to meet these requirements.
- In-depth experience with state-of-the-art models in one or more domains: CNNs, Transformers, LLMs, or Generative AI, especially as run or adapted for embedded platforms.
- Demonstrated skills in ML model analysis (e.g., CNN, LLM), performance benchmarking, hardware/software bottleneck analysis, and accuracy testing on embedded devices.
- Experience in quantizing, pruning, and compressing ML networks for efficient deployment on embedded or edge devices.
- Experience in hardware/software co-design, including collaborating with embedded systems engineers to optimize end-to-end ML inference pipelines.
- Proficiency with programming languages and tools relevant to embedded and AI development (e.g., Python, C, C++, cross-compilation toolchains, and debugging tools such as JTAG/SWD).
- Familiarity with popular development and build tools such as Git, CMake, Make, Conda, Docker, VSCode, Bash, Linux, and platforms/tools specific to embedded systems.
- Demonstrated expertise in performance analysis and debugging across the software/hardware boundary, with a focus on embedded deployments.
- Ability to think from the end user's perspective (including embedded product users), understand their requirements, and advocate for solutions that improve usability.
- Strong technical documentation skills, with the ability to clearly and concisely present solutions, features, and methodologies to stakeholders at all levels.
- Excellent analytical, problem-solving, and communication skills, including experience collaborating within cross-functional teams spanning hardware, software, and customer-facing roles.
ACADEMIC CREDENTIALS:
- Bachelor's or Master's degree in Computer Engineering, Computer Science, Electrical Engineering, or a comparable discipline
LOCATION:
- San Jose, CA OR
- Longmont, CO
#LI-JT1 #LI-HYBRID

Benefits offered are described in AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.