
ROS Robot

Showing all 13 items
  • 【Driven by AI, powered by Jetson Nano】JetArm is a desktop-level AI vision robot arm developed for ROS education scenarios. It supports Python programming, uses mainstream deep learning frameworks, and works with MediaPipe to implement creative AI projects such as object recognition, somatosensory (gesture) control, and voice interaction (a minimal gesture-input sketch follows this product's bullets).
  • 【High-performance Vision Robot Arm】JetArm includes a 6DOF vision robot arm built with 35KG intelligent serial bus servos. An HD camera is mounted at the end of the arm, providing a first-person perspective for object-gripping tasks.
  • 【Depth Point Cloud, Flexible Grabbing in 3D Scenes】JetArm is equipped with a high-performance 3D depth camera. Using the target's RGB data, position coordinates, and depth information together with RGB-D fusion detection, it can grab objects freely in 3D scenes and support other AI projects.
  • 【Far-field Voice Interaction】The JetArm ultimate kit incorporates a circular microphone array and speaker for human-robot interaction applications, including Text-to-Speech conversion, 360° sound source localization, voice-controlled mapping navigation, etc. Combined with the vision robot arm, JetArm can perform voice-controlled gripping and transporting.
  • 【Robot Control Across Platforms】JetArm provides multiple control methods, such as the WonderAi app (compatible with iOS and Android), a wireless handle, Robot Operating System (ROS), and a keyboard, allowing you to control the robot at will.
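As a rough illustration of the somatosensory (gesture) control mentioned above, here is a minimal, hypothetical sketch that reads hand landmarks with MediaPipe and OpenCV. The camera index and the mapping from fingertip position to arm commands are assumptions; this is not Hiwonder's own demo code.

```python
# Hypothetical sketch: hand-landmark detection with MediaPipe and OpenCV.
# Camera index 0 is an assumption; mapping the landmark to actual arm motion
# (servo/ROS commands) is left out.
import cv2
import mediapipe as mp

cap = cv2.VideoCapture(0)  # assumed camera index
with mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.6) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            # Index fingertip (landmark 8), coordinates normalized to [0, 1].
            tip = results.multi_hand_landmarks[0].landmark[8]
            print(f"fingertip at x={tip.x:.2f}, y={tip.y:.2f}")
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
```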
  • 【Smart ROS Robots Driven by AI】JetRover is a professional robotic platform for ROS learning and development. It is powered by an NVIDIA Jetson Nano B01, supports Robot Operating System (ROS), leverages mainstream deep learning frameworks, incorporates MediaPipe development, and enables YOLO model training.
  • 【SLAM Development and Diverse Configuration】JetRover is equipped with a powerful combination of a 3D depth camera and Lidar. It utilizes a wide range of advanced algorithms, including gmapping, hector, karto, and cartographer, enabling precise multi-point navigation, TEB path planning, and dynamic obstacle avoidance (a minimal navigation-goal sketch follows this product's bullets).
  • 【High-performance Vision Robot Arm】JetRover includes a 6DOF vision robot arm built with 35KG intelligent serial bus servos. An HD camera is mounted at the end of the arm, providing a first-person perspective for object-grabbing tasks.
  • 【Far-field Voice Interaction】The JetRover ultimate kit incorporates a circular microphone array and speaker for human-robot interaction applications, including Text-to-Speech conversion, 360° sound source localization, voice-controlled mapping navigation, etc. Combined with the vision robot arm, JetRover can perform voice-controlled grabbing and transporting.
  • 【Robot Control Across Platforms】JetRover provides multiple control methods, such as the WonderAi app (compatible with iOS and Android), a wireless handle, Robot Operating System (ROS), and a keyboard, allowing you to control the robot at will.
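To make the multi-point navigation claim concrete, the following is a minimal sketch of sending one goal through the standard ROS 1 move_base action interface, assuming the robot's navigation stack is already running. The frame name, waypoint coordinates, and action name are generic ROS conventions, not values taken from JetRover's launch files.

```python
# Hypothetical sketch: sending a single navigation goal through the standard
# ROS 1 move_base action interface. The "map" frame and the goal pose are
# assumptions; the actual JetRover launch files and topic names are not shown.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("send_nav_goal")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0    # assumed waypoint, metres
goal.target_pose.pose.position.y = 0.5
goal.target_pose.pose.orientation.w = 1.0  # face forward

client.send_goal(goal)
client.wait_for_result()
print("navigation result state:", client.get_state())
```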
  • 【High-performance Hardware Configurations】AiNex is developed upon Robot Operating System (ROS) and features a Raspberry Pi 4B, 24 intelligent serial bus servos, an HD camera, and movable mechanical hands. It is a professional AI humanoid robot capable of lively mimicking human actions.
  • 【Advanced Inverse Kinematics Gait】AiNex integrates an inverse kinematics algorithm for flexible pose control as well as gait planning for omnidirectional movement (a simplified two-link IK sketch follows this product's bullets).
  • 【Outstanding AI Vision Recognition and Tracking】Leveraging technologies such as machine vision and OpenCV, AiNex excels in precise object recognition, enabling it to accomplish target recognition and tracking tasks.
  • 【Robot Control Across Platforms】AiNex provides multiple control methods, such as the WonderROS app (compatible with iOS and Android), a wireless handle, and PC software.
  • 【Detailed Tutorials and Professional After-sales Service】We offer an extensive collection of tutorials covering up to 18 topics.
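For readers new to the inverse kinematics mentioned above, here is a textbook two-link planar IK sketch using the law of cosines. The link lengths are placeholder values, not AiNex's actual dimensions, and a real humanoid leg has more joints than this.

```python
# Hypothetical sketch: planar two-link inverse kinematics (law of cosines),
# illustrating the kind of computation behind gait planning.
# Link lengths are made-up values, not AiNex specs.
import math

def two_link_ik(x, y, l1=0.10, l2=0.10):
    """Return (hip, knee) angles in radians for a foot target (x, y) in the hip frame."""
    d2 = x * x + y * y
    # Law of cosines for the knee; clamp to avoid domain errors near the reach limit.
    c = max(-1.0, min(1.0, (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)))
    knee = math.acos(c)  # knee-bent ("elbow-down") solution
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee), l1 + l2 * math.cos(knee))
    return hip, knee

print(two_link_ik(0.12, -0.08))
```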
  • 【Classic Ackermann Steering Structure】JetAcker is built upon an Ackermann chassis, providing a platform for learning about and validating robots based on the Ackermann steering structure (the basic steering geometry is sketched after this product's bullets).
  • 【Dive into AI Algorithms】JetAcker runs on an NVIDIA Jetson Nano B01, supports ROS, and utilizes deep learning frameworks, MediaPipe, YOLO training, and TensorRT acceleration for diverse 3D machine vision applications.
  • 【SLAM Development and Diverse Configuration】JetAcker is equipped with a 3D depth camera and Lidar, enabling remote communication, precise 2D mapping navigation, TEB path planning, and dynamic obstacle avoidance.
  • 【High-performance Hardware Configurations】JetAcker features an aluminum alloy bracket, CNC steering, and a range of hardware: 100mm rubber wheels, 520 Hall encoder motors, Lidar, ORBBEC Astra Pro Plus camera, 240° pan-tilt, multi-functional expansion board, motor driver, and 11.1V 6000mAh Lipo battery.
  • 【Robot Control Across Platforms】JetAcker provides multiple control methods, such as the WonderAi app (compatible with iOS and Android), a wireless handle, Robot Operating System (ROS), and a keyboard.
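The Ackermann steering structure referenced above boils down to a simple geometric relation; the sketch below turns a desired turning radius into inner and outer front-wheel angles. The wheelbase and track values are illustrative, not JetAcker's measured dimensions.

```python
# Hypothetical sketch: the basic Ackermann steering relation.
# WHEELBASE and TRACK are placeholder geometry, not JetAcker measurements.
import math

WHEELBASE = 0.20  # assumed distance between front and rear axles, metres
TRACK = 0.17      # assumed distance between the two front wheels, metres

def ackermann_angles(turn_radius):
    """Return (inner, outer) steering angles in radians for a turning radius
    measured to the centre of the rear axle (must exceed TRACK / 2)."""
    inner = math.atan(WHEELBASE / (turn_radius - TRACK / 2.0))
    outer = math.atan(WHEELBASE / (turn_radius + TRACK / 2.0))
    return inner, outer

inner, outer = ackermann_angles(0.6)
print(f"inner {math.degrees(inner):.1f} deg, outer {math.degrees(outer):.1f} deg")
```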
  • 【Smart ROS Robots Driven by AI】JetTank is powered by an NVIDIA Jetson Nano B01 and supports ROS. It leverages mainstream deep learning frameworks, incorporates MediaPipe development, and enables YOLO model training. This combination delivers 3D machine vision applications, including autonomous driving, somatosensory interaction, and KCF target tracking (a minimal KCF tracking sketch follows this product's bullets).
  • 【SLAM Development and Diverse Configuration】JetTank is equipped with a 3D depth camera and Lidar. It utilizes a wide range of advanced algorithms including gmapping, hector, karto and cartographer, enabling precise multi-point navigation, TEB path planning, and dynamic obstacle avoidance.
  • 【High-performance Hardware Configurations】JetTank is made of aluminum alloy and employs various hardware components, including reinforced nylon continuous tracks, 520 Hall encoder gear motors, metal drive wheels, Lidar, an Astra Pro Plus depth camera, a 6-microphone array, a speaker, etc.
  • 【Far-field Voice Interaction】The JetTank advanced kit incorporates a 6-microphone array and speaker for human-robot interaction applications, including Text-to-Speech conversion, voice wake-up, 360° sound source localization, voice-controlled mapping navigation, etc.
  • 【Robot Control Across Platforms】JetTank provides multiple control methods, such as the WonderAi app (iOS & Android), a wireless handle, Robot Operating System (ROS), and a keyboard, allowing you to control the robot at will.
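KCF target tracking, named in the JetTank description, is available directly in OpenCV's contrib modules; the sketch below runs it on a webcam feed. It assumes opencv-contrib-python is installed and a camera at index 0, and it is independent of JetTank's own software.

```python
# Hypothetical sketch: OpenCV KCF tracking on a webcam feed.
# Requires opencv-contrib-python; recent builds expose the tracker under cv2.legacy.
import cv2

tracker = (cv2.TrackerKCF_create() if hasattr(cv2, "TrackerKCF_create")
           else cv2.legacy.TrackerKCF_create())

cap = cv2.VideoCapture(0)                      # assumed camera index
ok, frame = cap.read()
bbox = cv2.selectROI("select target", frame)   # draw a box, then press ENTER
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = (int(v) for v in bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("KCF tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```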
  • Powered by Jetson Nano (included)
  • Open source and based on ROS
  • Deep learning, model training, inverse kinematics
  • Abundant sensors for function expansion
  • Changeable robot models with mecanum wheel chassis or sliding rail
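Since the bullet above mentions a mecanum wheel chassis, here is the standard inverse-kinematics relation for such a chassis, mapping a body velocity to four wheel speeds. The geometry constants are placeholders, not measurements of any Hiwonder chassis.

```python
# Hypothetical sketch: standard mecanum-wheel inverse kinematics for an
# X-configured chassis. half_length / half_width are placeholder geometry.
def mecanum_wheel_speeds(vx, vy, wz, half_length=0.10, half_width=0.10):
    """Return (front_left, front_right, rear_left, rear_right) wheel linear speeds
    for forward vx (m/s), leftward vy (m/s), and counter-clockwise wz (rad/s)."""
    k = half_length + half_width
    fl = vx - vy - k * wz
    fr = vx + vy + k * wz
    rl = vx + vy - k * wz
    rr = vx - vy + k * wz
    return fl, fr, rl, rr

print(mecanum_wheel_speeds(0.2, 0.0, 0.5))
```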
  • 【Smart ROS Robots Driven by AI】JetAuto Pro is a professional robotic platform for ROS learning and development. It is powered by an NVIDIA Jetson Nano B01, supports Robot Operating System (ROS), leverages mainstream deep learning frameworks, incorporates MediaPipe development, and enables YOLO model training.
  • 【SLAM Development and Diverse Configuration】JetAuto Pro is equipped with a powerful combination of a 3D depth camera and Lidar. It utilizes a wide range of advanced algorithms including gmapping, hector, karto and cartographer, enabling precise multi-point navigation, TEB path planning, and dynamic obstacle avoidance.
  • 【High-performance Vision Robot Arm】JetAuto Pro includes a 5DOF vision robot arm built with 35KG intelligent serial bus servos. An HD camera is mounted at the end of the arm, providing a first-person perspective for object-gripping tasks.
  • 【Far-field Voice Interaction】The JetAuto Pro advanced kit incorporates a 6-microphone array and speaker for human-robot interaction applications, including Text-to-Speech conversion, 360° sound source localization, voice-controlled mapping navigation, etc. (an offline text-to-speech sketch follows this product's bullets). Combined with the vision robot arm, JetAuto Pro can perform voice-controlled gripping and transporting.
  • 【Robot Control Across Platforms】JetAuto Pro provides multiple control methods, such as the WonderAi app (compatible with iOS and Android), a wireless handle, Robot Operating System (ROS), and a keyboard, allowing you to control the robot at will.
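As a stand-in for the Text-to-Speech feature mentioned above, the sketch below uses the offline pyttsx3 library. It only illustrates the idea; it is not Hiwonder's voice stack and says nothing about the microphone array or sound-source localization.

```python
# Hypothetical sketch: offline text-to-speech with pyttsx3, shown only to
# illustrate the "Text-to-Speech" idea; not Hiwonder's speech pipeline.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 150)  # speaking rate in words per minute
engine.say("Target located. Moving to pick it up.")
engine.runAndWait()
```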
  • 【Smart ROS Robots Driven by AI】JetAuto is powered by NVIDIA Jetson Nano B01 and supports Robot Operating System (ROS). It leverages mainstream deep learning frameworks, incorporates MediaPipe development, enables YOLO model training, and utilizes TensorRT acceleration. This combination delivers a diverse range of 3D machine vision applications, including autonomous driving, human feature recognition, and KCF target tracking.
  • 【SLAM Development and Diverse Configuration】JetAuto is equipped with a powerful combination of a 3D depth camera and Lidar. It utilizes a wide range of advanced algorithms, including gmapping, hector, karto, cartographer, and RRT, enabling precise multi-point navigation, TEB path planning, and dynamic obstacle avoidance. Using 3D vision, it can capture point cloud images of the environment to achieve RTAB 3D mapping navigation. JetAuto offers two Lidar options: SLAMTEC A1 and EAI G4 (advanced).
  • 【Far-field Voice Interaction】The JetAuto advanced kit incorporates a 6-microphone array and speaker for fascinating human-robot interaction applications, including Text-to-Speech conversion, voice wake-up, 360° sound source localization, offline voice recognition, voice-controlled mapping navigation, etc.
  • 【Robot Control Across Platforms】JetAuto provides multiple control methods, such as the WonderAi app (compatible with iOS and Android), a wireless handle, Robot Operating System (ROS), and a keyboard, allowing you to control the robot at will. By running your own code (for example, the velocity-command sketch below), you can command JetAuto to perform specific actions. Please note that the navigation app is exclusively available for Android.
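A minimal example of commanding the robot from code, as mentioned in the last bullet: publishing geometry_msgs/Twist in ROS 1. The /cmd_vel topic name and the velocity values are assumptions; the topic JetAuto actually subscribes to may differ.

```python
# Hypothetical sketch: driving a ROS 1 base by publishing geometry_msgs/Twist.
# The /cmd_vel topic name is an assumption, not taken from JetAuto's launch files.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("drive_forward")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
rate = rospy.Rate(10)  # publish at 10 Hz

cmd = Twist()
cmd.linear.x = 0.15    # assumed forward speed, m/s
for _ in range(30):    # drive for roughly 3 seconds
    pub.publish(cmd)
    rate.sleep()
pub.publish(Twist())   # zero velocity: stop
```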
  • 【Omni-directional Movement, First-person View】The chassis is equipped with 4 high-performance encoder geared motors and 4 omni-directional mecanum wheels, so ArmPi Pro can move omnidirectionally (360°). Combined with the HD camera mounted at the end of the robot arm, it provides a first-person view.
  • 【Powerful Control System】The Raspberry Pi CM4 makes breakthroughs in processor speed, multimedia performance, memory, and connectivity. The combination of the Raspberry Pi 4B and the Raspberry Pi expansion board significantly enhances ArmPi Pro's AI performance!
  • 【AI Vision Recognition, Target Tracking】ArmPi Pro uses OpenCV as its image processing library and utilizes the FPV camera to recognize and locate target blocks, realizing color sorting, target tracking, line following, and other AI games (a minimal color-detection sketch follows this product's bullets).
  • 【APP Control, FPV Transmitted Image】Android and iOS apps are available for remote control of the robot. Via the app, you can control the robot in real time and switch between various AI games with just one tap.
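The color sorting mentioned above typically starts with HSV thresholding in OpenCV; the sketch below finds the largest red region in a webcam frame. The HSV bounds and camera index are illustrative guesses that would need tuning on the real robot.

```python
# Hypothetical sketch: HSV thresholding, the usual first step of a color-sorting game.
# The red range below is an illustrative guess and needs tuning for real lighting.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)            # assumed camera index
lower = np.array([0, 120, 70])       # illustrative lower HSV bound for red
upper = np.array([10, 255, 255])     # illustrative upper HSV bound for red

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)       # largest red blob
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("color detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```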
  • 【Powered by NVIDIA Jetson Nano】JetHexa is a hexapod robot powered by an NVIDIA Jetson Nano B01 and supporting Robot Operating System (ROS). It leverages mainstream deep learning frameworks, incorporates MediaPipe development, enables YOLO model training, and utilizes TensorRT acceleration.
  • 【SLAM Development and AI Application】Equipped with a 3D depth camera and Lidar, it achieves precise 2D mapping, multi-point navigation, TEB path planning, Lidar tracking, and dynamic obstacle avoidance. Using 3D vision, it can capture point cloud images of the environment to achieve RTAB 3D mapping navigation.
  • 【Inverse Kinematics Algorithm】JetHexa can switch flexibly between tripod gait and ripple gait (a simple gait-timing sketch follows this product's bullets). It employs an inverse kinematics algorithm, allowing it to perform "moonwalking" at a fixed speed and height. Furthermore, JetHexa allows for an adjustable pitch angle, roll angle, direction, speed, height, and stride, giving you complete control over its movements. With its self-balancing function, JetHexa can conquer complex terrains with ease.
  • 【Robot Control Across Platforms】JetHexa provides multiple control methods, such as the WonderAi app (compatible with iOS and Android), a wireless handle, Robot Operating System (ROS), and a keyboard, allowing you to control the robot at will. By running your own code, you can command JetHexa to perform specific actions.
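To illustrate the tripod gait mentioned above, here is a tiny timing sketch in which two groups of three legs swing in alternating half-cycles. The leg grouping and cycle period are simplifications; it does not drive JetHexa's actual servos.

```python
# Hypothetical sketch: phase scheduling for a hexapod tripod gait, where two
# groups of three legs alternate between swing and stance. Timing only; no
# servo or ROS interface is used.
def tripod_phases(t, period=1.0):
    """Return a list of (leg_index, in_swing) flags at time t (seconds)."""
    phase = (t % period) / period   # 0..1 over one gait cycle
    first_half = phase < 0.5
    states = []
    for leg in range(6):
        group_a = leg % 2 == 0      # simplified grouping: legs 0, 2, 4 vs 1, 3, 5
        in_swing = first_half if group_a else not first_half
        states.append((leg, in_swing))
    return states

print(tripod_phases(0.2))  # group A swinging
print(tripod_phases(0.7))  # group B swinging
```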
  • 【Driven by Raspberry Pi and Coreless Servos】PuppyPi is an AI vision quadruped robot driven by Raspberry Pi CM4 4GB and built on the Robot Operating System (ROS). It is equipped with 8 stainless steel coreless servos, delivering high-precision performance, rapid rotation speed, and a robust torque of 8KG.cm. With an IMU sensor, PuppyPi Pro can detect its posture in real-time, enabling self-balancing capabilities.
  • 【AI Vision, Unlimited Creativity】PuppyPi Pro is equipped with an HD wide-angle camera with 1-megapixel resolution. It utilizes the OpenCV library for efficient image processing, enabling a diverse range of AI applications, including target recognition and localization, line following, obstacle avoidance, face detection, ball shooting, color tracking, and tag recognition.
  • 【Various Control Methods and FPV Live Camera Feed】You can conveniently control PuppyPi Pro through WonderPi app available for Android and iOS devices, PC software, or a wireless PS2 handle. Additionally, PuppyPi Pro provides a first-person perspective experience by transmitting the live camera feed to the app.
  • 【Gait Planning, Free Adjustment】PuppyPi Pro incorporates inverse kinematics algorithm offering precise control over the touch time, lift time, and lifted height of each leg. You can easily adjust these parameters to achieve different gaits, including ripple and trot. Additionally, PuppyPi Pro provides detailed analysis of inverse kinematics, along with the source code for the inverse kinematics function.
  • 【Lidar Expansion】PuppyPi offers the option to expand its capabilities with a mini TOF Lidar. This sensor enables PuppyPi to perform 360° laser scanning of its environment and execute advanced functions such as simultaneous localization, autonomous mapping, multi-point navigation, TEB path planning, dynamic obstacle avoidance, and advanced SLAM capabilities (a minimal scan-reading sketch follows below).
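A minimal sketch of reading the Lidar in ROS 1, as referenced in the last bullet: subscribe to sensor_msgs/LaserScan and report the nearest return. The /scan topic name and the 0.3 m threshold are assumptions, not PuppyPi defaults.

```python
# Hypothetical sketch: reading a Lidar's sensor_msgs/LaserScan in ROS 1 and
# logging when something is closer than an assumed 0.3 m threshold.
import math
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(scan):
    # Drop infinite/NaN returns before looking for the closest obstacle.
    valid = [r for r in scan.ranges if not math.isinf(r) and not math.isnan(r)]
    if valid and min(valid) < 0.3:
        rospy.loginfo("obstacle %.2f m away", min(valid))

rospy.init_node("nearest_obstacle")
rospy.Subscriber("/scan", LaserScan, on_scan)  # /scan is an assumed topic name
rospy.spin()
```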
  • 【Driven by AI, Powered by NVIDIA Jetson Nano】JetMax is an open-source AI robotic arm developed on ROS. It is built around the Jetson Nano control system, supports Python programming, adopts mainstream deep learning frameworks, and can realize a variety of AI applications.
  • 【AI Vision, Deep Learning】The end of JetMax is equipped with a high-definition camera that provides FPV video transmission. Through OpenCV image processing it can recognize colors, faces, gestures, etc., and through deep learning JetMax can perform image recognition and item handling.
  • 【Inverse Kinematics Algorithm】JetMax uses an inverse kinematics algorithm to accurately track, grab, sort, and palletize target items in its field of view. Hiwonder provides inverse kinematics analysis courses, the DH model of the linked coordinate systems, and the source code of the inverse kinematics function (a short DH forward-kinematics sketch follows this product's bullets).
  • 【Multiple Expansion Methods】You can purchase an additional mecanum wheel chassis or slide rails to expand JetMax's range of motion and take on more interesting AI projects.
  • 【Detailed Course Materials and Professional After-sales Service】We provide 200+ courses and offer online technical assistance (China time) to help you learn JetMax more efficiently! Course content includes an introduction to JetMax, ROS and OpenCV series courses, AI deep learning courses, inverse kinematics courses, action group editing courses, and creative gameplay courses. Note: Hiwonder only provides technical assistance for the existing courses; more in-depth development needs to be completed by customers themselves.
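Since the JetMax material discusses the DH model, here is a short forward-kinematics sketch built from standard Denavit-Hartenberg parameters. The three-joint parameter table is made up for illustration and is not JetMax's real model.

```python
# Hypothetical sketch: forward kinematics from standard Denavit-Hartenberg
# parameters. The parameter table is a made-up 3-joint example, not JetMax's model.
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Single-link transform for standard DH parameters (angles in radians)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Illustrative (theta, d, a, alpha) rows for a 3-joint arm.
dh_table = [
    (np.deg2rad(30),  0.08, 0.00, np.pi / 2),
    (np.deg2rad(45),  0.00, 0.10, 0.0),
    (np.deg2rad(-20), 0.00, 0.10, 0.0),
]

T = np.eye(4)
for row in dh_table:
    T = T @ dh_matrix(*row)   # chain the link transforms base-to-tip
print("end-effector position:", T[:3, 3])
```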
  • 【ROS Robot Arm Powered by Raspberry Pi】ArmPi FPV is an open-source AI robot arm based on Robot Operating System (ROS) and powered by Raspberry Pi. Loaded with high-performance intelligent servos and an AI camera, and programmable in Python, it is capable of vision recognition and gripping.
  • 【AI Vision Recognition and Tracking】An HD wide-angle camera is positioned at the end of ArmPi FPV, providing real-time first-person-view (FPV) transmission at 1-megapixel resolution. By processing images with OpenCV, it can recognize colors, tags, and human faces, opening up a wide range of AI applications such as color sorting, target tracking, intelligent stacking, and face detection (a minimal face-detection sketch follows this product's bullets).
  • 【Inverse Kinematics Algorithm】ArmPi FPV employs an inverse kinematics algorithm, enabling precise target tracking and gripping within its field of view. It also provides a detailed analysis of inverse kinematics and the DH model, and offers the source code of the inverse kinematics function.
  • 【Robot Control Across Platforms】ArmPi FPV provides multiple control methods, such as the WonderPi app (compatible with iOS and Android), a wireless handle, a mouse, PC software, and Robot Operating System (ROS), allowing you to control the robot at will.
  • 【Abundant AI Applications】Guided by intelligent vision, ArmPi FPV excels at executing functions such as stock-in, stock-out, and stock transfer, enabling integration into Industry 4.0 environments.
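The face detection listed above can be prototyped with the Haar cascade bundled with the opencv-python package, as in the sketch below. The camera index is an assumption, and this is a generic OpenCV example rather than ArmPi FPV's own vision pipeline.

```python
# Hypothetical sketch: face detection with the Haar cascade shipped in the
# opencv-python package. Camera index 0 is an assumption.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow("face detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```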