
Christmas Maker Gift

Showing all 8 items
  • 【Powered by NVIDIA Jetson Nano】JetHexa is a hexapod robot powered by NVIDIA Jetson Nano B01 and supports Robot Operating System (ROS). It leverages mainstream deep learning frameworks, incorporates MediaPipe development, enables YOLO model training, and utilizes TensorRT acceleration.
  • 【SLAM Development and AI Application】Equipped with a 3D depth camera and Lidar, it achieves precise 2D mapping, multi-point navigation, TEB path planning, Lidar tracking, and dynamic obstacle avoidance. Using 3D vision, it can capture point cloud images of the environment to achieve RTAB-Map 3D mapping and navigation.
  • 【Inverse Kinematics Algorithm】JetHexa can switch flexibly between tripod and ripple gaits. It employs an inverse kinematics algorithm, allowing it to perform "moonwalking" at a fixed speed and height. Furthermore, JetHexa allows for adjustable pitch angle, roll angle, direction, speed, height, and stride, giving you complete control over its movements. With its self-balancing function, JetHexa can conquer complex terrain with ease.
  • 【Robot Control Across Platforms】JetHexa provides multiple control methods, such as the WonderAi app (compatible with iOS and Android systems), a wireless handle, Robot Operating System (ROS), and a keyboard, allowing you to control the robot at will. By importing the corresponding code, you can command JetHexa to perform specific actions (a minimal ROS velocity-command sketch follows this list).
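The JetHexa bullets above mention gait control through ROS. Below is a minimal, hedged sketch of how a ROS 1 node could stream velocity commands to such a robot; the /cmd_vel topic name, command rate, and speed value are assumptions rather than Hiwonder's documented interface.

```python
#!/usr/bin/env python3
# Hedged sketch: drive a ROS 1 robot by streaming velocity commands.
# The "/cmd_vel" topic name and speed values are assumptions, not
# Hiwonder's documented JetHexa interface.
import rospy
from geometry_msgs.msg import Twist

def walk_forward(duration_s=3.0, speed=0.1):
    """Publish a forward velocity for duration_s seconds, then stop."""
    rospy.init_node('gait_demo', anonymous=True)
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rate = rospy.Rate(10)                       # 10 Hz command stream
    cmd = Twist()
    cmd.linear.x = speed                        # m/s forward
    end_time = rospy.Time.now() + rospy.Duration(duration_s)
    while not rospy.is_shutdown() and rospy.Time.now() < end_time:
        pub.publish(cmd)
        rate.sleep()
    pub.publish(Twist())                        # zero twist = stop

if __name__ == '__main__':
    walk_forward()
```

The same Twist message can also carry an angular.z component to make the robot turn in place.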
  • 【Driven by AI, Powered by Jetson】JetAuto is a high-performance educational robot developed for ROS learning scenarios. Equipped with Jetson Nano/Orin Nano/Orin NX controllers and compatible with both ROS1 and ROS2, it integrates deep learning frameworks with TensorRT acceleration, making it ideal for advanced AI applications such as SLAM and vision recognition.

  • 【SLAM Development and Diverse Configuration】JetAuto is equipped with a powerful combination of a 3D depth camera and Lidar. It utilizes a wide range of advanced algorithms, including gmapping, hector, karto, cartographer, and RRT, enabling precise multi-point navigation, TEB path planning, and dynamic obstacle avoidance. Using 3D vision, it can capture point cloud images of the environment to achieve RTAB-Map 3D mapping and navigation (a navigation-goal sketch follows this list).

  • 【Empowered by Large AI Model, Human-Robot Interaction Redefined】JetAuto deploys multimodal models with ChatGPT at its core, integrating 3D vision and a 6-microphone array. This synergy enhances its perception, reasoning, and actuation capabilities, enabling advanced embodied AI applications and delivering natural, context-aware human-robot interaction.

  • 【Robot Control Across Platforms】JetAuto provides multiple control methods, such as the WonderAi app (compatible with iOS and Android systems), a wireless handle, and a keyboard, allowing you to control the robot at will. By importing the corresponding code, you can command JetAuto to perform specific actions.

  • 【Comprehensive Learning Tutorials】Through JetAuto's structured curriculum, master cutting-edge technologies including ROS development, SLAM mapping and navigation, 3D depth vision, OpenCV, YOLOv8, MediaPipe, large AI model integration, MoveIt and Gazebo simulation, and voice interaction.
    Supported by extensive documentation and video tutorials, our progressive learning system breaks down complex concepts into digestible modules, guiding you from fundamentals to advanced implementations, empowering you to build your own intelligent robotic systems.
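The navigation bullet above describes multi-point navigation and TEB path planning on top of ROS. As one concrete illustration, the hedged sketch below sends a single goal to a standard ROS 1 move_base action server; the frame name and coordinates are illustrative assumptions, not JetAuto's actual launch configuration.

```python
#!/usr/bin/env python3
# Hedged sketch: send one navigation goal to a standard ROS 1 move_base
# action server. Frame "map" and the goal coordinates are illustrative.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def send_goal(x, y):
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0   # face forward, no rotation

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == '__main__':
    rospy.init_node('nav_goal_demo')
    send_goal(1.0, 0.5)
```

Chaining several such goals is the usual way multi-point navigation is scripted on top of a planner like TEB.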

  • 【Smart ROS Robots Driven by AI】JetTank supports Robot Operating System (ROS). It leverages mainstream deep learning frameworks, incorporates MediaPipe development, and enables YOLO model training. This combination delivers 3D machine vision applications, including autonomous driving, somatosensory interaction, and KCF target tracking (a tracking sketch follows this list).
  • 【SLAM Development and Diverse Configuration】JetTank is equipped with a 3D depth camera and Lidar. It utilizes a wide range of advanced algorithms including gmapping, hector, karto and cartographer, enabling precise multi-point navigation, TEB path planning, and dynamic obstacle avoidance.
  • 【High-performance Hardware Configurations】JetTank is made of aluminum alloy and employs various hardware components, including reinforced nylon continuous track, 520 Hall encoder gear motors, metal drive wheel, Lidar, Astra Pro Plus depth camera, 6-microphone array, speaker, etc.
  • 【Far-field Voice Interaction】The JetTank Advanced Kit incorporates a 6-microphone array and a speaker, allowing for human-robot interaction applications, including text-to-speech conversion, voice wake-up, 360° sound source localization, voice-controlled mapping navigation, etc.
  • 【Robot Control Across Platforms】JetTank provides multiple control methods, like WonderAi app (iOS&Android), wireless handle, Robot Operating System (ROS) and keyboard, allowing you to control the robot at will.
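KCF target tracking, mentioned in the first JetTank bullet, is available off the shelf in OpenCV's contrib modules. The hedged sketch below shows the generic pattern; the camera index and the factory-function fallback (which covers different OpenCV versions) are assumptions, not Hiwonder's own code.

```python
# Hedged sketch: KCF target tracking with OpenCV (requires opencv-contrib-python).
# Camera index 0 is an assumption.
import cv2

def make_kcf_tracker():
    if hasattr(cv2, 'TrackerKCF_create'):
        return cv2.TrackerKCF_create()
    return cv2.legacy.TrackerKCF_create()       # newer contrib layout

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
bbox = cv2.selectROI('select target', frame)    # draw a box around the target
tracker = make_kcf_tracker()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)
    if found:
        x, y, w, h = map(int, box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('KCF tracking', frame)
    if cv2.waitKey(1) & 0xFF == 27:             # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```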
  • 【Driven by AI, Powered by NVIDIA Jetson Nano】JetMax is an open-source AI robotic arm developed based on ROS. It is built on the Jetson Nano control system, supports Python programming, adopts mainstream deep learning frameworks, and can realize a variety of AI applications.
  • 【AI Vision, Deep Learning】JetMax's end effector is fitted with a high-definition camera that enables FPV video transmission. Image processing with OpenCV can recognize colors, faces, gestures, and more. Through deep learning, JetMax can perform image recognition and item handling.
  • 【Inverse Kinematics Algorithm】JetMax uses an inverse kinematics algorithm to accurately track, grab, sort, and palletize target items in its field of view. Hiwonder provides inverse kinematics analysis courses, a DH (Denavit-Hartenberg) model of the linked coordinate systems, and inverse kinematics function source code (a simplified IK sketch follows this list).
  • 【Multiple Expansion Methods】You can purchase an additional Mecanum wheel chassis or slide rail to expand JetMax's range of motion and take on more interesting AI projects.
  • 【Detailed Course Materials and Professional After-sales Service】We provide 200+ courses and online technical assistance (China time zone) to help you learn JetMax more efficiently! Course content includes: an introduction to using JetMax, ROS and OpenCV series courses, AI deep learning courses, inverse kinematics courses, action group editing courses, and creative gameplay courses. Note: Hiwonder only provides technical assistance for the existing courses; more in-depth development needs to be completed by customers themselves.
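To give a flavor of the inverse kinematics material mentioned above, here is a hedged, self-contained sketch of closed-form IK for a simplified two-link planar arm. The link lengths and target point are illustrative; JetMax's real kinematic chain has more joints and is covered by Hiwonder's own course code.

```python
# Hedged sketch: closed-form inverse kinematics for a 2-link planar arm.
# Link lengths L1/L2 are illustrative, not JetMax's real dimensions.
import math

def two_link_ik(x, y, L1=0.10, L2=0.10):
    """Return (shoulder, elbow) angles in radians that place the tip at (x, y)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle
    cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(cos_elbow) > 1.0:
        raise ValueError('target out of reach')
    elbow = math.acos(cos_elbow)                # elbow-down solution
    shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return shoulder, elbow

print(two_link_ik(0.15, 0.05))
```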
  • 【Raspberry Pi Powered & ROS1/ROS2】 PuppyPi is a high-performance AI vision robot dog designed for AI education. It is equipped with the Raspberry Pi 5 and fully supports both ROS1 and ROS2 environments. With Python programming, PuppyPi offers efficient AI computation and a wide range of robotic applications. We provide access to all source code and detailed documentation to help you create your own AI robot dog!

  • 【AI Large Model Integration & Enhanced Human-Robot Interaction】 PuppyPi integrates a multimodal model, featuring ChatGPT at its core for advanced human-robot interaction. Combined with AI vision, it excels in perception, reasoning, and action, creating a more natural and flexible interaction experience!

  • 【High-Torque Smart Servos & Inverse Kinematics】 PuppyPi is equipped with 8 high-torque stainless steel gear servos, offering faster response times and stable output. The robot's legs use a link structure design combined with inverse kinematics algorithms to enable coordinated multi-joint movement and precise motion control.

  • 【AI Vision Recognition & Tracking】 PuppyPi features a high-definition camera that enables a variety of AI vision capabilities, including color recognition, target tracking, face detection, ball kicking, line following, and MediaPipe gesture control.

  • 【Lidar & Robotic Arm Expansion】 PuppyPi supports TOF Lidar and robotic arm expansion. It can perform 360° environmental scanning, SLAM navigation, and dynamic obstacle avoidance. Additionally, it can precisely grasp objects, opening up opportunities for advanced AI applications.
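The PuppyPi vision bullet above lists MediaPipe gesture control among its capabilities. The hedged sketch below shows the generic MediaPipe Hands pipeline that such a feature typically builds on; the camera index is an assumption, and mapping a detected landmark to an actual robot command is left as a stub rather than taken from PuppyPi's code.

```python
# Hedged sketch: hand landmark detection with MediaPipe Hands.
# Camera index 0 is an assumption; robot command mapping is omitted.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.6) as hands:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        result = hands.process(rgb)
        if result.multi_hand_landmarks:
            # Index fingertip (landmark 8), in normalized image coordinates
            tip = result.multi_hand_landmarks[0].landmark[8]
            print(f'index fingertip at x={tip.x:.2f}, y={tip.y:.2f}')
        cv2.imshow('hands', frame)
        if cv2.waitKey(1) & 0xFF == 27:
            break
cap.release()
cv2.destroyAllWindows()
```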

  • 【Omni-directional Movement, First-person View】The chassis is equipped with 4 high-performance encoder geared motors and 4 omni-directional Mecanum wheels, enabling ArmPi Pro to move in any direction through 360°. Combined with the HD camera mounted at the end of the robot arm, it provides a first-person view.
  • 【Powerful Control System】The Raspberry Pi 4B/5 delivers breakthroughs in processor speed, multimedia performance, memory, and connectivity. The combination of the Raspberry Pi 4B/5 and the Raspberry Pi expansion board significantly enhances ArmPi Pro's AI performance!
  • 【AI Vision Recognition, Target Tracking】ArmPi Pro uses OpenCV as its image processing library and utilizes the FPV camera to recognize and locate target blocks, enabling color sorting, target tracking, line following, and other AI games (a color-detection sketch follows this list).
  • 【App Control, FPV Transmitted Image】Android and iOS apps are available for remote robot control. Via the app, you can control the robot in real time and switch between various AI games with a single tap.
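Color sorting of the kind described in the ArmPi Pro vision bullet usually starts with an HSV threshold and a contour centroid. The sketch below is a hedged, generic OpenCV example; the red HSV bounds and camera index are illustrative values that would need tuning on the real robot.

```python
# Hedged sketch: HSV color thresholding to locate a red block with OpenCV.
# HSV bounds and camera index are illustrative.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
lower_red = np.array([0, 120, 80])
upper_red = np.array([10, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_red, upper_red)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        biggest = max(contours, key=cv2.contourArea)
        m = cv2.moments(biggest)
        if m['m00'] > 0:
            cx, cy = int(m['m10'] / m['m00']), int(m['m01'] / m['m00'])
            cv2.circle(frame, (cx, cy), 6, (0, 255, 0), -1)   # mark block centre
    cv2.imshow('color detection', frame)
    if cv2.waitKey(1) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()
```

The block's centroid is what an arm controller would then hand to the inverse kinematics solver to plan a grasp.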
Hiwonder SpiderPi: AI Intelligent Visual Hexapod Robot Powered by Raspberry Pi 5
Save 10%
  • 【Raspberry Pi AI Robot】An ideal platform for conducting research in motion control for hexapod robots, machine vision, OpenCV, deep learning, and various other fields.
  • 【AI Vision for Infinite Creativity】Features a wide-angle HD camera and uses OpenCV for various AI vision applications such as object transportation.
  • 【Support App Control】 Enjoy remote control of robot movements and access a live camera feed for first person view.
  • 【Inverse Kinematics, Various Gait Modes】SpiderPi utilizes inverse kinematics to enable a range of gait modes, including tripod and quadruped gaits (a gait-phase sketch follows this list). It also offers adjustable height and speed, as well as the ability to turn and change its direction of motion.
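The gait bullet above mentions tripod and quadruped gait modes. As a hedged illustration of what a tripod gait scheduler involves, the sketch below splits six legs into two alternating groups and generates a simple per-leg swing phase; the group assignment, cycle period, and lift height are illustrative, not SpiderPi's actual parameters.

```python
# Hedged sketch: time-based tripod-gait scheduling for a six-legged robot.
# Leg grouping, period, and lift height are illustrative values only.
import math

LEG_GROUPS = {'A': (0, 2, 4), 'B': (1, 3, 5)}   # alternating tripods

def tripod_phase(t, period=1.0):
    """Return per-leg phase in [0, 1) at time t seconds."""
    phase = (t % period) / period
    phases = {}
    for leg in range(6):
        offset = 0.0 if leg in LEG_GROUPS['A'] else 0.5
        phases[leg] = (phase + offset) % 1.0
    return phases

def foot_height(p, lift=0.03):
    """Simple vertical foot trajectory: swing during the first half-cycle."""
    return lift * math.sin(2 * math.pi * p) if p < 0.5 else 0.0

for t in (0.0, 0.25, 0.5):
    print(t, {leg: round(foot_height(p), 3) for leg, p in tripod_phase(t).items()})
```

Per-leg foot positions produced this way are what the inverse kinematics layer converts into servo angles.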
Hiwonder TonyPi Pro AI Humanoid Robot with Raspberry Pi 5 – Integrated Multimodal AI Model (ChatGPT), AI Vision Tracking, Voice Interaction, and Hand-Eye Coordination
Save 2%
  • 【AI-Driven and Raspberry Pi Powered】TonyPi Pro is a high-performance AI vision robot designed for AI education applications. It is powered by the Raspberry Pi 5 and integrates the OpenCV image processing library with robotic inverse kinematics algorithms. Offering open-source access, TonyPi Pro provides a flexible development environment that supports advanced AI robotics development.

  • 【AI Large Model Integration for Enhanced Human-Machine Interaction】 TonyPi Pro incorporates a multimodal model, with ChatGPT at the core of its interaction system. Coupled with AI vision and voice interaction, TonyPi Pro excels in perception, reasoning, and action, enabling advanced embodied AI applications and delivering a seamless, intuitive human-machine interaction experience!

  • 【High-Voltage Intelligent Bus Servos】 Equipped with 18 high-voltage intelligent bus servos, TonyPi Pro offers rapid response times and stable output, enabling precise multi-joint coordination and complex motion control. This ensures accurate humanoid postures and interactive movements to meet a variety of demands.

  • 【Upgraded Hand-Eye Coordination & Dynamic Gait】 TonyPi Pro features enhanced open-close robotic hands and AI-powered vision, allowing it to grasp and transport objects with greater flexibility. Its advanced gait system supports a wider range of motion, enabling tasks like autonomous hurdle-crossing and stair climbing—ideal for exploring creative AI applications.

  • 【Comprehensive Learning Resources】 TonyPi offers a rich array of educational content, including resources on robotic motion control, OpenCV, deep learning, MediaPipe, AI large models, voice interaction, and sensor applications. We provide extensive learning materials and video tutorials to guide you from foundational concepts to advanced practices, helping you develop your own AI humanoid robot.
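TonyPi Pro's interaction bullets describe a multimodal pipeline with ChatGPT at its core. As a hedged, minimal illustration of the text part of such a loop, the sketch below uses the openai Python package (v1 client interface); the model name, prompts, and the idea of routing the reply to the robot's speech output are assumptions, not Hiwonder's actual integration.

```python
# Hedged sketch: a minimal text round-trip with the openai package (>= 1.0).
# Model name and prompts are assumptions; on a robot, the reply would be
# passed to a text-to-speech engine rather than printed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_robot(user_text):
    response = client.chat.completions.create(
        model='gpt-4o-mini',
        messages=[
            {'role': 'system', 'content': 'You are a small humanoid robot assistant.'},
            {'role': 'user', 'content': user_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == '__main__':
    print(ask_robot('Wave your hand and introduce yourself.'))
```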
