Job Description
Research Intern
Job Location:  Santa Clara, California
Location Flexibility:  Multiple Locations in Country
Req Id:  5025
Posting Start Date:  1/16/26

At Fujitsu, we are driven by our purpose to make the world more sustainable by building trust in society through innovation.

We have been a pioneer in technology and innovation for over 80 years, and we are committed to using our expertise to help businesses and organizations transform for the digital age. We believe that digital transformation is essential to creating a more sustainable future. That's why we are working with our customers to develop solutions that can help them reduce their environmental impact, improve their efficiency, and create a more equitable society.

We are committed to contributing to the United Nations Sustainable Development Goals (SDGs). These goals are a blueprint for a better future for all, and we believe that technology can play a vital role in achieving them.

If you share our passion for making a meaningful impact on the world, we invite you to join our global family of 130,000 employees spanning more than 50 countries. We are a diverse workforce, and we offer a wide range of opportunities for you to grow and develop your career.

Together, we can create a more sustainable future for all.

Research Internship: System Design Automation

Fujitsu’s Converging Technology Laboratory (CT Lab) conducts applied research at the intersection of artificial intelligence, optimization, robotics, and system modeling to address complex industrial and societal challenges. Our work emphasizes rapid prototyping, interdisciplinary collaboration, and translating advanced AI research into executable system solutions.

The Converging Technology Laboratory is looking for an intern to support a research prototype that explores how robots can automatically interpret unstructured assembly manuals and convert them into executable task plans. The intern will contribute to building a “Manual-to-Motion” pipeline that leverages Vision-Language Models (VLMs), symbolic planning (PDDL/HTN), and robotic simulation to demonstrate document-driven robot programming. The internship is research-oriented and well-suited for graduate students interested in robotics, AI, and automated reasoning.
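For candidates unfamiliar with the term, an open-loop "Manual-to-Motion" flow of the kind described above can be sketched in miniature (all function names and the step schema below are hypothetical placeholders for illustration, not Fujitsu's actual implementation):

```python
# Toy "Manual-to-Motion" sketch: parse -> plan -> execute.
# Everything here is an illustrative stand-in; the real pipeline would use
# a VLM for parsing and a symbolic planner plus a simulator downstream.

def parse_manual(manual_text: str) -> list[dict]:
    """Stand-in for the VLM stage: extract structured steps from a manual.
    Faked here with a trivial line-based parser."""
    steps = []
    for line in manual_text.strip().splitlines():
        verb, obj, target = line.split()
        steps.append({"action": verb, "object": obj, "target": target})
    return steps

def to_plan(steps: list[dict]) -> list[str]:
    """Stand-in for the symbolic stage: render steps as PDDL-style
    ground actions."""
    return [f"({s['action']} {s['object']} {s['target']})" for s in steps]

def execute(plan: list[str]) -> None:
    """Stand-in for the simulation stage: just log each action."""
    for action in plan:
        print("executing", action)

manual = """attach leg_1 tabletop
attach leg_2 tabletop"""

execute(to_plan(parse_manual(manual)))
```

The real research challenge is in each stand-in: robust multimodal parsing of diagrams, validating the generated symbolic plan, and grounding each action geometrically before execution.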

Job responsibilities:

  • Multimodal Knowledge Extraction: Assist in designing prompts and pipelines that use Vision-Language Models to extract structured assembly steps from PDF manuals containing text and diagrams; represent extracted knowledge in structured formats (e.g., JSON, HTN, or PDDL).
  • Symbolic Task Representation: Support the automatic generation and validation of PDDL domain and problem files from extracted assembly instructions; test logical consistency and solvability with off-the-shelf planners.
  • Geometric Grounding & Simulation: Help associate symbolic actions with 3D CAD assets and simulation objects; contribute to setting up and running robotic assembly demonstrations in simulation environments.
  • Prototype Integration: Participate in integrating semantic parsing, geometric grounding, and execution into an end-to-end open-loop pipeline; evaluate generalization across assembly manuals of varying complexity.
  • Documentation & Communication: Document system design, assumptions, and limitations; contribute to a final demo, presentation, or technical report summarizing results and research insights.
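To make the symbolic task representation responsibility concrete, here is a minimal sketch of emitting a PDDL problem file from extracted goal conditions (the domain name, predicates, and objects are invented for illustration and are not part of Fujitsu's actual pipeline):

```python
# Hypothetical generator for a PDDL problem file; names are illustrative.

def make_problem(objects: list[str], goal_atoms: list[str]) -> str:
    """Render object and goal lists into a PDDL problem definition."""
    objs = " ".join(objects)
    goals = " ".join(goal_atoms)
    return (
        "(define (problem table-assembly)\n"
        "  (:domain furniture)\n"
        f"  (:objects {objs})\n"
        "  (:init (clear tabletop))\n"
        f"  (:goal (and {goals}))\n"
        ")"
    )

problem = make_problem(
    objects=["leg_1", "leg_2", "tabletop"],
    goal_atoms=["(attached leg_1 tabletop)", "(attached leg_2 tabletop)"],
)
print(problem)
```

A generated file like this would then be handed to an off-the-shelf planner, which both checks solvability and produces the action sequence for the grounding and simulation stages.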

Required Qualifications:

  • Currently enrolled in a PhD program in Robotics, Computer Science, Mechanical Engineering, Electrical Engineering, or a related field.
  • Strong Python programming skills.
  • Familiarity with at least one of the following areas:
      • Robotics or motion planning
      • Computer vision or vision-language models
      • Symbolic planning (PDDL, HTN, task planning)
      • Simulation environments (Gazebo, MuJoCo, PyBullet, etc.)
  • Ability to work independently on research tasks with guidance and feedback.

Preferred Qualifications (Nice to Have):

  • Experience with large language models or multimodal models.
  • Exposure to robotics manipulation or task and motion planning (TAMP).
  • Familiarity with CAD formats (STL, STEP) and basic geometry concepts.
  • Interest in research prototyping, automation, or AI-driven manufacturing.

Fujitsu salaries are aligned to the specific geographic location in which the work is primarily performed. It is not typical for an individual to be hired at or near the top of the range for their role, and compensation decisions depend on the circumstances of each situation. The pay range for this role reflects the wide range of factors considered in compensation decisions, including but not limited to specific skills, qualifications, experience, and comparison to other employees already in this role. The pay range for this position is estimated at $40/hr. to $65/hr. USD. Additionally, this role may be eligible for a short-term incentive based on company results and individual performance.

As a technology company, Fujitsu recognizes that human resources are its most important capital. To create an environment where all employees can work positively and healthily, in both mind and body, we offer a full range of health, 401(k), and other benefits.

Relocation Supported:  No
Visa Sponsorship Approved:  No

At Fujitsu, we are committed to creating a diverse and inclusive workplace where everyone feels valued and respected. We believe that diversity and inclusion are essential to our success, and we are committed to creating an environment where all employees can thrive.

We are an equal opportunity employer and qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability, marital status, sexual orientation, gender identity or expression, genetic information, veteran status, or any other characteristic protected by law.

We believe that everyone has something to contribute, and we are committed to creating a workplace where everyone can reach their full potential.

California Consumer Privacy Act (CCPA), read here