ROS Robot
Hiwonder
Hiwonder JetAuto AI Robot Kit – NVIDIA Jetson-Powered ROS1/ROS2 Educational Robot with multimodal AI model (ChatGPT), Voice Control, SLAM & AI Vision
【Driven by AI, Powered by Jetson】 JetAuto is a high-performance educational robot developed for ROS learning scenarios. Equipped with Jetson Nano/Orin Nano/Orin NX controllers and compatible with both ROS1 and ROS2, it integrates deep learning frameworks with TensorRT acceleration, making it ideal for advanced AI applications such as SLAM and vision recognition.
【SLAM Development and Diverse Configuration】 JetAuto is equipped with a powerful combination of a 3D depth camera and Lidar. It utilizes a wide range of advanced algorithms, including gmapping, hector, karto, cartographer, and RRT, enabling precise multi-point navigation, TEB path planning, and dynamic obstacle avoidance. Using 3D vision, it can capture point clouds of the environment to achieve RTAB-Map 3D mapping and navigation.
【Empowered by a Large AI Model, Human-Robot Interaction Redefined】 JetAuto deploys multimodal models with ChatGPT at its core, integrating 3D vision and a 6-microphone array. This synergy enhances its perception, reasoning, and actuation capabilities, enabling advanced embodied AI applications and delivering natural, context-aware human-robot interaction.
【Robot Control Across Platforms】 JetAuto offers multiple control methods, such as the WonderAi app (compatible with iOS and Android), a wireless handle, and a keyboard, so you can control the robot however you like. By importing the corresponding code, you can command JetAuto to perform specific actions (see the Python sketch at the end of this listing).
【Comprehensive Learning Tutorials】 Through JetAuto's structured curriculum, master cutting-edge technologies including ROS development, SLAM mapping and navigation, 3D depth vision, OpenCV, YOLOv8, MediaPipe, large AI model integration, MoveIt and Gazebo simulation, and voice interaction.
Supported by extensive documentation and video tutorials, our progressive learning system breaks down complex concepts into digestible modules, guiding you from fundamentals to advanced implementations and empowering you to build your own intelligent robotic systems.
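The listing above mentions commanding JetAuto by importing code. As a rough illustration of what such a script can look like on a ROS1 robot, here is a minimal Python sketch that publishes velocity commands; the /cmd_vel topic name and the node name are assumptions, so check the JetAuto documentation for the actual chassis topic.

#!/usr/bin/env python3
# Minimal ROS1 sketch: drive the chassis forward for two seconds, then stop,
# by publishing geometry_msgs/Twist messages.
# Assumption: the chassis driver listens on /cmd_vel (verify against the docs).
import rospy
from geometry_msgs.msg import Twist

def drive_forward():
    rospy.init_node('jetauto_drive_demo')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rospy.sleep(1.0)                  # give the publisher time to connect

    cmd = Twist()
    cmd.linear.x = 0.2                # 0.2 m/s forward

    rate = rospy.Rate(10)             # publish at 10 Hz
    end_time = rospy.Time.now() + rospy.Duration(2.0)
    while rospy.Time.now() < end_time and not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

    pub.publish(Twist())              # an all-zero Twist stops the robot

if __name__ == '__main__':
    try:
        drive_forward()
    except rospy.ROSInterruptException:
        pass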
Hiwonder
Hiwonder ArmPi Pro Raspberry Pi 5 ROS Robotic Arm Developer Kit with 4WD Mecanum Wheel Chassis Open Source Robot Car
- 【Omni-directional movement, first person view】The chassis is equipped with 4 high-performance encoder geared motors and 4 omni-directional mecanum wheels, enabling ArmPi Pro to move in any direction (360° movement). Combined with the HD camera mounted at the end of the robot arm, it provides a first-person view.
- 【Powerful Control System】The Raspberry Pi 4B/5 delivers a breakthrough in processor speed, multimedia performance, memory, and connectivity. The combination of the Raspberry Pi 4B/5 and the Raspberry Pi expansion board significantly enhances ArmPi Pro's AI performance!
- 【AI Vision Recognition, Target Tracking】ArmPi Pro uses OpenCV as its image processing library and its FPV camera to recognize and locate target blocks, enabling color sorting, target tracking, line following, and other AI games (see the color detection sketch at the end of this listing).
- 【APP Control, FPV Transmitted Image】Android and iOS apps are available for remote control of the robot. Via the app, you can control the robot in real time and switch between AI games with one tap.
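As a hedged illustration of the color recognition described above, the sketch below thresholds a camera frame in HSV space with OpenCV and locates the largest red blob; the HSV bounds and camera index are illustrative values, not ArmPi Pro's tuned parameters.

# Minimal OpenCV color detection sketch: mask a frame in HSV space and draw a
# box around the largest red region. Thresholds and camera index are illustrative.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)             # FPV camera (index is an assumption)
lower_red = np.array([0, 120, 70])    # illustrative HSV lower bound for red
upper_red = np.array([10, 255, 255])  # illustrative HSV upper bound for red

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_red, upper_red)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(largest)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('color detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()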
Hiwonder
Hiwonder JetHexa ROS Hexapod Robot Kit Powered by Jetson Nano with Lidar Depth Camera Support SLAM Mapping and Navigation
- 【Powered by NVIDIA Jetson Nano】JetHexa is a hexapod robot powered by NVIDIA Jetson Nano B01 and supports Robot Operating System (ROS). It leverages mainstream deep learning frameworks, incorporates MediaPipe development, enables YOLO model training, and utilizes TensorRT acceleration.
- 【SLAM Development and AI Application】Equipped with a 3D depth camera and Lidar, it achieves precise 2D mapping, multi-point navigation, TEB path planning, Lidar tracking, and dynamic obstacle avoidance. Using 3D vision, it can capture point clouds of the environment to achieve RTAB-Map 3D mapping and navigation.
- 【Inverse Kinematics Algorithm】JetHexa can switch flexibly between tripod and ripple gaits. It employs an inverse kinematics algorithm (see the sketch at the end of this listing), allowing it to perform "moonwalking" at a fixed speed and height. JetHexa also offers adjustable pitch angle, roll angle, direction, speed, height, and stride, giving you complete control over its movements. With its self-balancing function, JetHexa can conquer complex terrain with ease.
- 【Robot Control Across Platforms】JetHexa offers multiple control methods, such as the WonderAi app (compatible with iOS and Android), a wireless handle, the Robot Operating System (ROS), and a keyboard, so you can control the robot however you like. By importing the corresponding code, you can command JetHexa to perform specific actions.
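To give a flavor of the inverse kinematics mentioned above, here is a minimal two-link planar IK sketch of the kind used to place a hexapod foot at a target point; the link lengths are hypothetical and do not reflect JetHexa's actual leg geometry.

# Two-link planar inverse kinematics sketch (elbow-down solution).
# Link lengths l1 and l2 are hypothetical, not JetHexa's real dimensions.
import math

def two_link_ik(x, y, l1=0.06, l2=0.08):
    """Return (hip, knee) joint angles in radians that place the foot at (x, y)."""
    d2 = x * x + y * y
    cos_knee = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)   # law of cosines
    if abs(cos_knee) > 1.0:
        raise ValueError("target out of reach")
    knee = math.acos(cos_knee)
    # Hip angle: direction to the target minus the offset caused by the knee bend
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee), l1 + l2 * math.cos(knee))
    return hip, knee

if __name__ == '__main__':
    hip, knee = two_link_ik(0.10, -0.05)
    print(f"hip: {math.degrees(hip):.1f} deg, knee: {math.degrees(knee):.1f} deg")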
Hiwonder
Hiwonder PuppyPi ROS Quadruped Robot with Raspberry Pi, Integrated with Large AI Model (ChatGPT), Supports AI Vision, Voice Interaction, LiDAR, and Robotic Arm Attachment
【Raspberry Pi Powered & ROS1/ROS2】 PuppyPi is a high-performance AI vision robot dog designed for AI education. It is equipped with the Raspberry Pi 5 and fully supports both ROS1 and ROS2 environments. With Python programming, PuppyPi offers efficient AI computation and a wide range of robotic applications. We provide access to all source code and detailed documentation to help you create your own AI robot dog!
【AI Large Model Integration & Enhanced Human-Robot Interaction】 PuppyPi integrates a multimodal model, featuring ChatGPT at its core, for advanced human-robot interaction (see the sketch at the end of this listing). Combined with AI vision, it excels in perception, reasoning, and action, creating a more natural and flexible interaction experience!
【High-Torque Smart Servos & Inverse Kinematics】 PuppyPi is equipped with 8 high-torque stainless steel gear servos, offering faster response times and stable output. The robot's legs use a link structure design combined with inverse kinematics algorithms to enable coordinated multi-joint movement and precise motion control.
【AI Vision Recognition & Tracking】 PuppyPi features a high-definition camera that enables a variety of AI vision capabilities, including color recognition, target tracking, face detection, ball kicking, line following, and MediaPipe gesture control.
【Lidar & Robotic Arm Expansion】 PuppyPi supports TOF Lidar and robotic arm expansion. It can perform 360° environmental scanning, SLAM navigation, and dynamic obstacle avoidance. Additionally, it can precisely grasp objects, opening up opportunities for advanced AI applications.
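As a hedged sketch of how a large-model integration like the one described above can work, the snippet below sends a transcribed voice command to a chat model and maps the reply onto a fixed set of robot actions; the model name, action list, and fallback logic are assumptions for illustration, not Hiwonder's actual implementation.

# Map a transcribed voice command to one robot action via the OpenAI chat API.
# Model name, action names, and fallback behavior are illustrative assumptions.
from openai import OpenAI

ACTIONS = ["sit", "stand", "walk_forward", "turn_left", "turn_right", "wave"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def command_to_action(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You control a quadruped robot. Reply with exactly one "
                        "action from this list: " + ", ".join(ACTIONS) + "."},
            {"role": "user", "content": transcript},
        ],
    )
    action = response.choices[0].message.content.strip()
    return action if action in ACTIONS else "stand"   # fall back to a safe action

if __name__ == '__main__':
    print(command_to_action("Hey puppy, say hello!"))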
- 【Driven by AI, Powered by NVIDIA Jetson Nano】JetMax is an open-source AI robotic arm built on ROS and the Jetson Nano control system. It supports Python programming, adopts mainstream deep learning frameworks, and can realize a variety of AI applications.
- 【AI Vision, Deep Learning】The end of JetMax is equipped with a high-definition camera that provides FPV video transmission. Image processing with OpenCV can recognize colors, faces, gestures, and more (see the face detection sketch at the end of this listing). Through deep learning, JetMax can perform image recognition and item handling.
- 【Inverse Kinematics Algorithm】JetMax uses an inverse kinematics algorithm to accurately track, grab, sort, and palletize target items in its field of view. Hiwonder provides inverse kinematics analysis courses, the Denavit-Hartenberg (DH) coordinate model, and the source code of the inverse kinematics function.
- 【Multiple Expansion Methods】You can purchase an additional mecanum wheel chassis or slide rails to expand JetMax's range of motion and take on more interesting AI projects.
- 【Detailed Course Materials and Professional After-sales Service】We provide 200+ courses and online technical assistance (China time zone) to help you learn JetMax more efficiently! Course content includes: an introduction to using JetMax, ROS and OpenCV series courses, AI deep learning courses, inverse kinematics courses, action group editing courses, and creative gameplay courses. Note: Hiwonder only provides technical assistance for the existing courses; more in-depth development needs to be completed by customers themselves.
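As a small illustration of the face recognition mentioned above, the sketch below runs the Haar cascade face detector that ships with opencv-python on a live camera feed; the camera index is an assumption, and JetMax's own pipeline may use a different detector.

# Minimal OpenCV face detection sketch using the bundled Haar cascade.
# Camera index is an assumption; JetMax may use a different detector internally.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow('face detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()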
- 【ROS Robot Arm Powered by Raspberry Pi】ArmPi FPV is an open-source AI robot arm based on the Robot Operating System and powered by Raspberry Pi. Loaded with high-performance intelligent servos and an AI camera, and programmable in Python, it is capable of vision recognition and gripping.
- 【AI Vision Recognition and Tracking】An HD wide-angle camera is positioned at the end of ArmPi FPV, providing real-time first-person-view (FPV) transmission at a 1-megapixel resolution. By processing images with OpenCV, it can recognize colors, tags, and human faces (see the tag detection sketch at the end of this listing), opening up a wide range of AI applications such as color sorting, target tracking, intelligent stacking, and face detection.
- 【Inverse Kinematics Algorithm】ArmPi FPV employs an inverse kinematics algorithm, enabling precise target tracking and gripping within its workspace. It also provides a detailed analysis of inverse kinematics and the DH model, and offers the source code for the inverse kinematics function.
- 【Robot Control Across Platforms】ArmPi FPV offers multiple control methods, such as the WonderPi app (compatible with iOS and Android), a wireless handle, a mouse, PC software, and the Robot Operating System, so you can control the robot however you like.
- 【Abundant AI Applications】Guided by intelligent vision, ArmPi FPV excels at executing warehouse-style functions such as stock-in, stock-out, and stock transfer, enabling integration into Industry 4.0 environments.
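As a hedged illustration of the tag recognition mentioned above, the sketch below detects fiducial markers with OpenCV's ArUco module (opencv-contrib-python, 4.7+ API); whether ArmPi FPV actually uses ArUco tags, and which tag dictionary, are assumptions to verify against its documentation.

# Detect ArUco fiducial markers in the camera feed (OpenCV >= 4.7 ArUco API).
# Tag family and camera index are assumptions for illustration.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow('tag detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()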