Featured Products
Hiwonder
Hiwonder MechDog Open-Source AI Robot Dog with AI Vision & Voice Interaction, Programmable with Scratch, Arduino, and Python
- 【Driven by Coreless Servos】MechDog is equipped with 8 high-speed coreless servos, providing high accuracy and robust force. Its leg linkage structure enables swift and precise walking.
- 【Inverse Kinematics for Flexible Movement】MechDog features built-in inverse kinematics that supports real-time adjustment of walking direction and posture, resulting in more flexible and lifelike movements (a minimal leg-IK sketch follows this product's feature list).
- 【Cross-Platform Control with Multiple Programming Options】MechDog supports control via PC software and a mobile app. It can be programmed using Python, Scratch, or Arduino, offering a variety of programming options.
- 【Extensive Expansion for Creativity】MechDog can be enhanced with various sensors and electronic modules. It is also compatible with LEGO components, allowing for a broad range of creative applications.
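The leg-IK sketch referenced above: a minimal Python example of planar two-link inverse kinematics, the kind of calculation a linkage-legged quadruped performs for each foot. The link lengths, frame convention, and function names are placeholder assumptions for illustration, not MechDog's actual firmware or SDK.

```python
import math

def leg_ik(x, y, l1=0.06, l2=0.06):
    """Planar 2-link inverse kinematics: foot target (x, y) in the hip frame
    -> (hip, knee) joint angles in radians. l1/l2 are placeholder link lengths (m)."""
    d2 = x * x + y * y
    d = math.sqrt(d2)
    if d > l1 + l2 or d < abs(l1 - l2):
        raise ValueError("target out of reach")
    # Law of cosines gives the knee angle; clamp to guard against rounding
    cos_knee = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    knee = math.acos(max(-1.0, min(1.0, cos_knee)))
    # Hip angle = direction to the target minus the offset introduced by the knee bend
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee), l1 + l2 * math.cos(knee))
    return hip, knee

if __name__ == "__main__":
    hip, knee = leg_ik(0.04, -0.09)   # a foot target slightly forward and below the hip
    print(f"hip={math.degrees(hip):.1f} deg, knee={math.degrees(knee):.1f} deg")
```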
Hiwonder
Hiwonder JetRover JETSON Robot Car with AI Vision Robotic Arm, Support ROS1 & ROS2, with Large AI Model (ChatGPT), SLAM Mapping/Navigation, AI Voice Interaction, Intelligent Sorting
【Support ROS1 and ROS2 Configuration】JetRover is compatible with both ROS1 and ROS2. Three main controller options are supported: Raspberry Pi 5, Jetson Nano, and Jetson Orin Nano, so users can choose the configuration that best fits their needs.
【Smart ROS Robots Driven by AI】JetRover is a professional robotic platform for ROS learning and development, powered by the NVIDIA Jetson Nano and supporting the Robot Operating System (ROS). It leverages mainstream deep learning frameworks, incorporates MediaPipe development, and enables YOLO model training.
【SLAM Development and Diverse Configuration】JetRover is equipped with a powerful combination of a 3D depth camera and Lidar. It utilizes a wide range of advanced algorithms including gmapping, hector, karto and cartographer, enabling precise multi-point navigation, TEB path planning, and dynamic obstacle avoidance.
【High-performance Vision Robot Arm】JetRover includes a 6DOF vision robot arm, featuring intelligent serial bus servos with 35 kg·cm of torque. An HD camera is positioned at the end of the robot arm, providing a first-person perspective for object grabbing tasks.
【Empowered by Large AI Model, Human-Robot Interaction Redefined】JetRover deploys multimodal models with ChatGPT at its core, integrating 3D vision and a 6-microphone array. This synergy enhances its perception, reasoning, and actuation capabilities, enabling advanced embodied AI applications and delivering natural, context-aware human-robot interaction.
【Robot Control Across Platforms】JetRover provides multiple control methods, like WonderAi app (compatible with iOS and Android system), wireless handle, Robot Operating System (ROS) and keyboard, allowing you to control the robot at will.
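To give a flavor of the ROS control path mentioned in the bullet above, here is a minimal ROS1 (rospy) sketch that drives a mobile base by publishing geometry_msgs/Twist messages. The /cmd_vel topic name and the velocity values are assumptions; consult the robot's own launch files and documentation for the actual interface.

```python
#!/usr/bin/env python3
# Minimal ROS1 sketch: drive the base by publishing geometry_msgs/Twist.
# The "/cmd_vel" topic name is an assumption; check the robot's launch files.
import rospy
from geometry_msgs.msg import Twist

def main():
    rospy.init_node("demo_drive")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)                 # 10 Hz command stream
    cmd = Twist()
    cmd.linear.x = 0.15                   # forward speed, m/s (placeholder)
    cmd.angular.z = 0.3                   # yaw rate, rad/s (placeholder)
    start = rospy.Time.now()
    while not rospy.is_shutdown() and (rospy.Time.now() - start).to_sec() < 3.0:
        pub.publish(cmd)                  # drive a gentle arc for ~3 seconds
        rate.sleep()
    pub.publish(Twist())                  # publish zeros to stop

if __name__ == "__main__":
    main()
```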
Hiwonder
Hiwonder JetHexa ROS Hexapod Robot Kit Powered by Jetson Nano with Lidar Depth Camera Support SLAM Mapping and Navigation
- 【Powered by NVIDIA Jetson Nano】JetHexa is a hexapod robot powered by the NVIDIA Jetson Nano B01 that supports the Robot Operating System (ROS). It leverages mainstream deep learning frameworks, incorporates MediaPipe development, enables YOLO model training, and utilizes TensorRT acceleration.
- 【SLAM Development and AI Application】Equipped with a 3D depth camera and Lidar, it achieves precise 2D mapping, multi-point navigation, TEB path planning, Lidar tracking, and dynamic obstacle avoidance. Using 3D vision, it can capture point cloud images of the environment to achieve RTAB 3D mapping navigation.
- 【Inverse Kinematics Algorithm】JetHexa can switch between tripod gait and ripple gait flexibly. It employs an inverse kinematics algorithm, allowing it to perform "moonwalking" with fixed speed and height. Furthermore, JetHexa allows for adjustable pitch angle, roll angle, direction, speed, height, and stride, giving you complete control over its movements. With self-balancing function, JetHexa can conquer complex terrains with ease.
- 【Robot Control Across Platforms】JetHexa provides multiple control methods, like WonderAi app (compatible with iOS and Android system), wireless handle, Robot Operating System (ROS) and keyboard, allowing you to control the robot at will. By importing corresponding codes, you can command JetHexa to perform specific actions.
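To illustrate the "command the robot through code" workflow above, here is a small ROS1 sketch that sends a single navigation goal through the standard move_base action interface used in typical SLAM/navigation stacks. The frame name, the goal coordinates, and the presence of a running move_base server are assumptions, not JetHexa-specific guarantees.

```python
#!/usr/bin/env python3
# Sketch: send one navigation goal through the standard ROS move_base action.
# Assumes a move_base server and a "map" frame from a SLAM/navigation launch.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("nav_goal_demo")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0     # placeholder: 1 m ahead in the map frame
goal.target_pose.pose.orientation.w = 1.0  # keep the current heading

client.send_goal(goal)
client.wait_for_result()
print("navigation result state:", client.get_state())
```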
Hiwonder
Hiwonder JetAcker AI Robot Kit – NVIDIA Jetson-Powered ROS1/ROS2 Educational Coding Robot with multimodal AI model (ChatGPT), Voice Control, AI Vision Interaction & SLAM
【Driven by AI, Powered by Jetson】 JetAcker is a high-performance educational robot developed for ROS learning scenarios. Equipped with Jetson Nano/Orin Nano/Orin NX controllers and compatible with both ROS1 and ROS2, it integrates deep learning frameworks with TensorRT acceleration, making it ideal for advanced AI applications such as SLAM and vision recognition.
【SLAM Development and Diverse Configuration】JetAcker is equipped with a powerful combination of a 3D depth camera and Lidar. It utilizes a wide range of advanced algorithms including gmapping, hector, karto, cartographer and RRT, enabling precise multi-point navigation, TEB path planning, and dynamic obstacle avoidance. Using 3D vision, it can capture point cloud images of the environment to achieve RTAB 3D mapping navigation.
【Empowered by Large AI Model, Human-Robot Interaction Redefined】 JetAcker deploys multimodal models with ChatGPT at its core, integrating 3D vision and a 6-microphone array. This synergy enhances its perception, reasoning, and actuation capabilities, enabling advanced embodied AI applications and delivering natural, context-aware human-robot interaction.
【Classical Ackermann Steering Mechanism】 The Ackermann chassis combines maneuverability and steering precision, facilitating the learning and validation of real-world vehicle steering principles. This design enables realistic simulation of autonomous driving scenarios for enhanced educational experiences (a small geometry sketch follows this feature list).
【Comprehensive Learning Tutorials】 Through JetAcker's structured curriculum, master cutting-edge technologies including ROS development, SLAM mapping and navigation, 3D depth vision, OpenCV, YOLOv8, MediaPipe, Large AI model integration, MoveIt and Gazebo simulation, and voice interaction.
Supported by extensive documentation and video tutorials, our progressive learning system breaks down complex concepts into digestible modules, guiding you from fundamentals to advanced implementations, empowering you to build your own intelligent robotic systems.
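The geometry sketch referenced in the Ackermann bullet above: a small, self-contained Python example of the ideal Ackermann relation between turning radius and the inner/outer front-wheel steering angles. The wheelbase and track dimensions are placeholders, not JetAcker's published specifications.

```python
import math

def ackermann_angles(radius, wheelbase=0.21, track=0.17):
    """Ideal Ackermann geometry: for a turn of the given radius (m, measured to
    the rear-axle midpoint), return the inner/outer front-wheel steering angles
    in degrees. Wheelbase and track are placeholder dimensions."""
    inner = math.atan2(wheelbase, radius - track / 2)
    outer = math.atan2(wheelbase, radius + track / 2)
    return math.degrees(inner), math.degrees(outer)

if __name__ == "__main__":
    for r in (0.5, 1.0, 2.0):
        i, o = ackermann_angles(r)
        print(f"R={r:.1f} m -> inner={i:.1f} deg, outer={o:.1f} deg")
```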
Hiwonder
Hiwonder PuppyPi ROS Quadruped Robot with Raspberry Pi, Integrated with Large AI Model (ChatGPT), Supports AI Vision, Voice Interaction, LiDAR, and Robotic Arm Attachment
【Raspberry Pi Powered & ROS1/ROS2】 PuppyPi is a high-performance AI vision robot dog designed for AI education. It is equipped with the Raspberry Pi 5 and fully supports both ROS1 and ROS2 environments. With Python programming, PuppyPi offers efficient AI computation and a wide range of robotic applications. We provide access to all source code and detailed documentation to help you create your own AI robot dog!
【AI Large Model Integration & Enhanced Human-Robot Interaction】 PuppyPi integrates a multimodal model, featuring ChatGPT at its core for advanced human-robot interaction. Combined with AI vision, it excels in perception, reasoning, and action, creating a more natural and flexible interaction experience!
【High-Torque Smart Servos & Inverse Kinematics】 PuppyPi is equipped with 8 high-torque stainless steel gear servos, offering faster response times and stable output. The robot's legs use a link structure design combined with inverse kinematics algorithms to enable coordinated multi-joint movement and precise motion control.
【AI Vision Recognition & Tracking】 PuppyPi features a high-definition camera that enables a variety of AI vision capabilities, including color recognition, target tracking, face detection, ball kicking, line following, and MediaPipe gesture control (a minimal OpenCV sketch follows this feature list).
【Lidar & Robotic Arm Expansion】 PuppyPi supports TOF Lidar and robotic arm expansion. It can perform 360° environmental scanning, SLAM navigation, and dynamic obstacle avoidance. Additionally, it can precisely grasp objects, opening up opportunities for advanced AI applications.
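The OpenCV sketch referenced in the AI vision bullet above: a minimal color-detection loop that finds the largest red blob in each camera frame and reports its centre. The camera index and HSV thresholds are assumptions to be tuned for the actual camera and lighting; this is illustrative, not PuppyPi's shipped vision code.

```python
import cv2
import numpy as np

# Minimal color-tracking sketch: find the largest red blob per frame and
# report its centre. Camera index 0 and the HSV thresholds are placeholders.
cap = cv2.VideoCapture(0)
lower = np.array([0, 120, 80])
upper = np.array([10, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)       # largest matching region
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        print("target centre:", (x + w // 2, y + h // 2))
    cv2.imshow("color tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```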
Hiwonder
Hiwonder JetArm ROS1/ROS2 3D Vision Robot Arm, with Multimodal AI Model (ChatGPT), AI Voice Interaction and Vision Recognition, Tracking & Sorting
【AI-Driven and Jetson-Powered】 JetArm is a high-performance 3D vision robot arm developed for ROS education scenarios. It is equipped with the Jetson Nano, Orin Nano, or Orin NX as the main controller, and is fully compatible with both ROS1 and ROS2. With Python and deep learning frameworks integrated, JetArm is ideal for developing sophisticated AI projects.
【High-Performance AI Robotics】JetArm features six intelligent serial bus servos with 35 kg·cm of torque. The robot is equipped with a 3D depth camera, a built-in 6-microphone array, and multimodal large AI models, enabling a wide variety of applications, such as 3D spatial grabbing, target tracking, object sorting, scene understanding, and voice control.
【Depth Point Cloud, 3D Scene Flexible Grabbing】 JetArm is equipped with a high-performance 3D depth camera. Using the target's RGB data, position coordinates, and depth information, combined with RGB-D fusion detection, it can perform free grasping in 3D scenes and other AI projects (a short back-projection sketch follows this feature list).
【Enhanced Human-Robot Interaction Powered by AI】 JetArm leverages Multimodal Large AI Models to create an interactive system centered around ChatGPT. Paired with its 3D vision capabilities, JetArm boasts outstanding perception, reasoning, and action abilities, enabling more advanced embodied AI applications and delivering a natural, intuitive human-robot interaction experience.
【Advanced Technologies & Comprehensive Tutorials】 With JetArm, you will master a broad range of cutting-edge technologies, including ROS development, 3D depth vision, OpenCV, YOLOv8, MediaPipe, AI models, robotic inverse kinematics, MoveIt, Gazebo simulation, and voice interaction. We provide in-depth learning materials and video tutorials to guide you step by step, ensuring you can confidently develop your own AI-powered robotic arm.
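The back-projection sketch referenced above: given a detected pixel and its depth reading, the pinhole camera model recovers a 3D point in the camera frame, which is the first step toward 3D grasping. The intrinsics and pixel values below are placeholders; a real application would read them from the depth camera driver.

```python
import numpy as np

def pixel_to_point(u, v, depth_m, fx, fy, cx, cy):
    """Back-project an image pixel (u, v) with a metric depth reading into a
    3D point in the camera frame using the pinhole model. fx/fy/cx/cy are the
    camera intrinsics reported by the depth camera driver."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Placeholder intrinsics and a detected target pixel with 0.42 m of depth
point = pixel_to_point(u=412, v=238, depth_m=0.42,
                       fx=615.0, fy=615.0, cx=320.0, cy=240.0)
print("target in camera frame (m):", point.round(3))
```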
Hiwonder
Hiwonder JetMax Pro JETSON NANO Robot Arm with Mecanum Wheel Chassis/ Electric Sliding Rail Support ROS Python
- Powered by Jetson Nano (included)
- Open source and based on ROS
- Deep learning, model training, inverse kinematics
- Abundant sensors for function expansion
- Changeable robot models with mecanum wheel chassis or sliding rail
Hiwonder
Hiwonder JetAuto AI Robot Kit – NVIDIA Jetson-Powered ROS1/ROS2 Educational Robot with multimodal AI model (ChatGPT), Voice Control, SLAM & AI Vision
【Driven by AI, Powered by Jetson】 JetAuto is a high-performance educational robot developed for ROS learning scenarios. Equipped with Jetson Nano/Orin Nano/Orin NX controllers and compatible with both ROS1 and ROS2, it integrates deep learning frameworks with TensorRT acceleration, making it ideal for advanced AI applications such as SLAM and vision recognition.
【SLAM Development and Diverse Configuration】JetAuto is equipped with a powerful combination of a 3D depth camera and Lidar. It utilizes a wide range of advanced algorithms including gmapping, hector, karto, cartographer and RRT, enabling precise multi-point navigation, TEB path planning, and dynamic obstacle avoidance. Using 3D vision, it can capture point cloud images of the environment to achieve RTAB 3D mapping navigation.
【Empowered by Large AI Model, Human-Robot Interaction Redefined】 JetAuto deploys multimodal models with ChatGPT at its core, integrating 3D vision and a 6-microphone array. This synergy enhances its perception, reasoning, and actuation capabilities, enabling advanced embodied AI applications and delivering natural, context-aware human-robot interaction.
【Robot Control Across Platforms】 JetAuto provides multiple control methods, like WonderAi app (compatible with iOS and Android system), wireless handle and keyboard, allowing you to control the robot at will. By importing corresponding codes, you can command JetAuto to perform specific actions.
【Comprehensive Learning Tutorials】 Through JetAuto's structured curriculum, master cutting-edge technologies including ROS development, SLAM mapping and navigation, 3D depth vision, OpenCV, YOLOv8, MediaPipe, Large AI model integration, MoveIt and Gazebo simulation, and voice interaction.
Supported by extensive documentation and video tutorials, our progressive learning system breaks down complex concepts into digestible modules, guiding you from fundamentals to advanced implementations, empowering you to build your own intelligent robotic systems.
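Since the curriculum above covers YOLOv8, here is a minimal detection sketch using the ultralytics Python package. The pretrained weights and the image path are placeholders; on the robot, camera frames would be fed in instead, and the package must be installed separately.

```python
# Minimal YOLOv8 detection sketch with the ultralytics package.
# "yolov8n.pt" and "test_image.jpg" are placeholders for illustration.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # small pretrained COCO model
results = model("test_image.jpg")   # run inference on one image

for box in results[0].boxes:
    cls_id = int(box.cls[0])
    conf = float(box.conf[0])
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"{model.names[cls_id]} {conf:.2f} at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f})")
```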
Hiwonder
Hiwonder TonyPi Pro AI Humanoid Robot with Raspberry Pi 5 – Integrated Multimodal AI Model (ChatGPT), AI Vision Tracking, Voice Interaction, and Hand-Eye Coordination
【AI-Driven and Raspberry Pi Powered】 TonyPi Pro is a high-performance AI vision robot designed for AI education applications. It is powered by the Raspberry Pi 5, integrated with the OpenCV image processing library and robotic inverse kinematics algorithms. Offering open-source access, TonyPi Pro provides a flexible development environment that supports advanced AI robotics development.
【AI Large Model Integration for Enhanced Human-Machine Interaction】 TonyPi Pro incorporates a multimodal model, with ChatGPT at the core of its interaction system. Coupled with AI vision and voice interaction, TonyPi Pro excels in perception, reasoning, and action, enabling advanced embodied AI applications and delivering a seamless, intuitive human-machine interaction experience!
【High-Voltage Intelligent Bus Servos】 Equipped with 18 high-voltage intelligent bus servos, TonyPi Pro offers rapid response times and stable output, enabling precise multi-joint coordination and complex motion control. This ensures accurate humanoid postures and interactive movements to meet a variety of demands.
【Upgraded Hand-Eye Coordination & Dynamic Gait】 TonyPi Pro features enhanced open-close robotic hands and AI-powered vision, allowing it to grasp and transport objects with greater flexibility. Its advanced gait system supports a wider range of motion, enabling tasks like autonomous hurdle-crossing and stair climbing—ideal for exploring creative AI applications.
【Comprehensive Learning Resources】 TonyPi Pro offers a rich array of educational content, including resources on robotic motion control, OpenCV, deep learning, MediaPipe, AI large models, voice interaction, and sensor applications. We provide extensive learning materials and video tutorials to guide you from foundational concepts to advanced practices, helping you develop your own AI humanoid robot.
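To give a flavor of the MediaPipe material listed above, the following is a minimal MediaPipe Hands sketch that counts detected hands per webcam frame. The camera index is an assumption, and mapping gestures to robot actions is intentionally left out; this is illustrative, not TonyPi Pro's shipped gesture code.

```python
import cv2
import mediapipe as mp

# Minimal MediaPipe Hands sketch: count detected hands per camera frame.
# Camera index 0 is a placeholder assumption.
mp_hands = mp.solutions.hands
cap = cv2.VideoCapture(0)

with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.6) as hands:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # MediaPipe expects RGB
        result = hands.process(rgb)
        n = len(result.multi_hand_landmarks or [])
        cv2.putText(frame, f"hands: {n}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("hands", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```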
Hiwonder
AiNex ROS Education AI Vision Humanoid Robot Powered by Raspberry Pi Biped Inverse Kinematics Algorithm Learning Teaching Kit
- 【High-performance Hardware Configurations】AiNex is developed on the Robot Operating System (ROS) and features a Raspberry Pi, 24 intelligent serial bus servos, an HD camera, and movable mechanical hands. It is a professional AI humanoid robot capable of lively mimicking human actions.
- 【Advanced Inverse Kinematics Gait】AiNex integrates an inverse kinematics algorithm for flexible pose control as well as gait planning for omnidirectional movement.
- 【Outstanding AI Vision Recognition and Tracking】Leveraging technologies like machine vision and OpenCV, AiNex excels in precise object recognition, enabling it to accomplish target recognition and tracking tasks.
- 【Robot Control Across Platforms】AiNex provides multiple control methods, like WonderROS app (compatible with iOS and Android system), wireless handle, and PC software.
- 【Detailed Tutorials and Professional After-sales Service】We offer an extensive collection of tutorials covering up to 18 topics.
- 【Raspberry Pi AI Robot】An ideal platform for conducting research in motion control for hexapod robots, machine vision, OpenCV, deep learning, and various other fields.
- 【Loaded AI Vision Robot Arm】The added 5DOF vision robot arm empowers SpiderPi Pro to accurately locate, track, pick up, sort, and stack target objects.
- 【AI Vision for Infinite Creativity】SpiderPi Pro features a wide-angle HD camera and uses OpenCV for various AI vision applications such as object transportation.
- 【Support App Control】 Enjoy remote control of robot movements and access a live camera feed for first person view.
- 【Inverse Kinematics, Various Gait Modes】SpiderPi Pro utilizes inverse kinematics to enable a range of gait modes, including tripod and quadruped gaits. It offers adjustable height and speed, as well as the ability to make turns and change its motion direction.
- 【Driven by AI, Powered by NVIDIA Jetson Nano】JetMax is an open source AI robotic arm developed based on ROS. It is based on the Jetson Nano control system, supports Python programming, adopts mainstream deep learning frameworks, and can realize a variety of AI artificial intelligence applications.
- 【AI Vision, Deep Learning】The end of JetMax is equipped with a high-definition camera that provides FPV video transmission. Through OpenCV image processing, it can recognize colors, faces, gestures, and more. Through deep learning, JetMax can perform image recognition and item handling.
- 【Inverse Kinematics Algorithm】JetMax uses an inverse kinematics algorithm to accurately track, grab, sort and palletize target items in the field of view. Hiwonder will provide inverse kinematics analysis courses, a linked coordinate-system Denavit-Hartenberg (DH) model, and the inverse kinematics function source code (a small DH-matrix sketch follows this feature list).
- 【Multiple Expansion Methods】You can purchase an additional Mecanum wheel chassis or sliding rail to expand JetMax's range of motion and take on more interesting AI projects.
- 【Detailed Course Materials and Professional After-sales Service】We provide 200+ courses and provide online technical assistance (China time) to help you learn JetMax more efficiently! Course content includes: introduction to the use of JetMax, ROS and OpenCV series courses, AI deep learning courses, inverse kinematics courses, action group editing teaching courses, and creative gameplay courses. Note: Hiwonder only provides technical assistance for existing courses, and more in-depth development needs to be completed by customers themselves.
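The DH-matrix sketch referenced above: a standard Denavit-Hartenberg homogeneous transform and a toy forward-kinematics chain built from it. The DH parameters below are placeholders for illustration; the actual JetMax table and inverse-kinematics derivation come from Hiwonder's course material.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform from link i-1 to link i."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Forward kinematics for a toy 3-joint chain with placeholder DH parameters
# (theta, d, a, alpha); the real arm's table would replace these values.
joints = [(np.radians(30), 0.08, 0.00, np.pi / 2),
          (np.radians(45), 0.00, 0.10, 0.0),
          (np.radians(-20), 0.00, 0.10, 0.0)]

T = np.eye(4)
for params in joints:
    T = T @ dh_transform(*params)      # chain the link transforms
print("end-effector position (m):", T[:3, 3].round(3))
```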
Hiwonder
Hiwonder TonyPi AI Humanoid Robot Powered by Raspberry Pi 5 with Multimodal Model (ChatGPT) Integration, AI Vision, and Voice Interaction
【AI-Driven and Raspberry Pi Powered】 TonyPi is a high-performance AI vision robot designed for AI education applications. It is powered by the Raspberry Pi 5, integrated with the OpenCV image processing library and robotic inverse kinematics algorithms. Offering open-source access, TonyPi provides a flexible development environment that supports advanced AI robotics development.
【AI Large Model Integration for Enhanced Human-Machine Interaction】 TonyPi incorporates a multimodal model, with ChatGPT at the core of its interaction system. Coupled with AI vision and voice interaction, TonyPi excels in perception, reasoning, and action, enabling advanced embodied AI applications and delivering a seamless, intuitive human-machine interaction experience!
【High-Voltage Intelligent Bus Servos】Equipped with 16 high-voltage intelligent bus servos, TonyPi offers rapid response times and stable output, enabling precise multi-joint coordination and complex motion control. This ensures accurate humanoid postures and interactive movements to meet a variety of demands.
【AI Vision Recognition and Tracking】 TonyPi's 2DOF head is fitted with an HD camera that provides a wide field of view. It supports a range of AI vision capabilities, including color recognition, target tracking, ball kicking, line following, and MediaPipe-based motion control for interactive AI applications (a short line-following sketch follows this feature list).
【Comprehensive Learning Resources】 TonyPi offers a rich array of educational content, including resources on robotic motion control, OpenCV, deep learning, MediaPipe, AI large models, voice interaction, and sensor applications. We provide extensive learning materials and video tutorials to guide you from foundational concepts to advanced practices, helping you develop your own AI humanoid robot.
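The line-following sketch referenced above: a minimal OpenCV routine that thresholds a dark line near the bottom of the frame and returns the horizontal offset of its centroid, which a control loop could steer against. The thresholds, region of interest, and camera index are placeholder assumptions, not TonyPi's shipped code.

```python
import cv2

def line_offset(frame):
    """Threshold a dark line in the bottom strip of the image and return the
    horizontal offset of its centroid from the image centre in pixels
    (negative = line is to the left). Thresholds are placeholders."""
    h, w = frame.shape[:2]
    roi = frame[int(h * 0.8):, :]                      # look near the robot's feet
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                                    # no line found
    cx = m["m10"] / m["m00"]
    return cx - w / 2.0

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print("line offset (px):", line_offset(frame))
cap.release()
```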
- 【Raspberry Pi AI Robot】An ideal platform for conducting research in motion control for hexapod robots, machine vision, OpenCV, deep learning, and various other fields.
- 【AI Vision for Infinite Creativity】SpiderPi features a wide-angle HD camera and uses OpenCV for various AI vision applications such as object transportation.
- 【Support App Control】 Enjoy remote control of robot movements and access a live camera feed for first person view.
- 【Inverse Kinematics, Various Gait Modes】SpiderPi utilizes inverse kinematics to enable a range of gait modes, including tripod and quadruped gaits. It offers adjustable height and speed, as well as the ability to make turns and change its motion direction.
- 【ROS Robot Arm Powered by Raspberry Pi】ArmPi FPV is an open-source AI robot arm based on Robot Operating System and powered by Raspberry Pi. Loaded with high-performance intelligent servos and AI camera, and programmable using Python, it is capable of vision recognition and gripping.
- 【AI Vision Recognition and Tracking】An HD wide-angle camera is positioned at the end of ArmPi FPV, providing real-time first-person-view (FPV) transmission at a resolution of 1 megapixel. By processing images with OpenCV, it can recognize colors, tags and human faces, opening up a wide range of AI applications, such as color sorting, target tracking, intelligent stacking and face detection.
- 【Inverse Kinematics Algorithm】ArmPi FPV employs an inverse kinematics algorithm, enabling precise target tracking and gripping within its field of view. It also provides detailed analysis of inverse kinematics and the DH model, and offers the source code for the inverse kinematics function.
- 【Robot Control Across Platforms】ArmPi FPV provides multiple control methods, like WonderPi app (compatible with iOS and Android system), wireless handle, mouse, PC software and Robot Operating System, allowing you to control the robot at will.
- 【Abundant AI Applications】Guided by intelligent vision, ArmPi FPV excels in executing functions such as stock-in, stock-out, and stock transfer, enabling integration into Industry 4.0 environments.
Hiwonder
JetAuto Pro ROS1 ROS2 Robot Car with Vision Robotic Arm Powered by Jetson Nano Support SLAM Mapping/ Navigation/ Python
- 【Smart ROS Robots Driven by AI】 JetAuto Pro is a professional robotic platform for ROS learning and development, powered by the NVIDIA Jetson Nano and supporting the Robot Operating System (ROS). It leverages mainstream deep learning frameworks, incorporates MediaPipe development, and enables YOLO model training.
- 【SLAM Development and Diverse Configuration】JetAuto Pro is equipped with a powerful combination of a 3D depth camera and Lidar. It utilizes a wide range of advanced algorithms including gmapping, hector, karto and cartographer, enabling precise multi-point navigation, TEB path planning, and dynamic obstacle avoidance.
- 【High-performance Vision Robot Arm】JetAuto Pro includes a 6DOF vision robot arm, featuring intelligent serial bus servos with 35 kg·cm of torque. An HD camera is positioned at the end of the robot arm, providing a first-person perspective for object gripping tasks (a short camera-stream sketch follows this feature list).
- 【Far-field Voice Interaction】JetAuto Pro advanced kit incorporates a 6-microphone array and speaker, allowing for human-robot interaction applications including text-to-speech conversion, 360° sound source localization, voice-controlled mapping navigation, and more. Integrated with the vision robot arm, JetAuto Pro can implement voice-controlled gripping and transporting.
- 【Robot Control Across Platforms】JetAuto Pro provides multiple control methods, like WonderAi app (compatible with iOS and Android system), wireless handle, Robot Operating System (ROS) and keyboard, allowing you to control the robot at will.
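The camera-stream sketch referenced in the robot-arm bullet above: a minimal ROS1 subscriber that converts the arm camera's sensor_msgs/Image messages into OpenCV frames via cv_bridge. The topic name is an assumption; check the camera launch file for the actual one.

```python
#!/usr/bin/env python3
# Sketch: view the arm's FPV camera stream in ROS1 by converting each
# sensor_msgs/Image to an OpenCV frame. "/usb_cam/image_raw" is a placeholder.
import rospy
import cv2
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def on_image(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    cv2.imshow("arm FPV", frame)
    cv2.waitKey(1)

rospy.init_node("fpv_viewer")
rospy.Subscriber("/usb_cam/image_raw", Image, on_image, queue_size=1)
rospy.spin()
```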