    Robotics

System learns after watching how-to videos

By Arjun Patel | April 22, 2025


Kushal Kedia (left), a doctoral student in the field of computer science, and Prithwish Dan are members of the development team behind RHyME, a system that allows robots to learn tasks by watching a single how-to video. Credit: Louis DiPietro

Cornell University researchers have developed a new AI-powered robotic framework, called RHyME (Retrieval for Hybrid Imitation under Mismatched Execution), that allows robots to learn tasks by watching a single how-to video.

Robots can be finicky learners. Historically, they have required precise, step-by-step instructions to complete basic tasks, and they tend to call it quits when things go off-script, such as after dropping a tool or losing a screw. RHyME, however, could fast-track the development and deployment of robotic systems by significantly reducing the time, energy and money needed to train them, the researchers said.

“One of the annoying things about working with robots is collecting so much data on the robot doing different tasks,” said Kushal Kedia, a doctoral student in the field of computer science. “That’s not how humans do tasks. We look at other people as inspiration.”

Kedia will present a paper titled “One-Shot Imitation under Mismatched Execution” in May at the Institute of Electrical and Electronics Engineers’ International Conference on Robotics and Automation in Atlanta. The work is also available on the arXiv preprint server.

Home robot assistants are still a long way off because they lack the wits to navigate the physical world and its countless contingencies. To get robots up to speed, researchers like Kedia are training them with what amounts to how-to videos: human demonstrations of various tasks in a lab setting. The hope of this approach, a branch of machine learning called “imitation learning,” is that robots will learn a sequence of tasks faster and be able to adapt to real-world environments.
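In its simplest form, imitation learning treats the demonstrations as a supervised dataset: observations from the demos are paired with the demonstrated actions, and a policy is fit to reproduce them (often called behavior cloning). The toy sketch below, with synthetic data and a linear least-squares “policy,” is only meant to illustrate that framing; it is not the Cornell system, and all names and dimensions in it are made up for the example.

```python
import numpy as np

# Toy imitation-learning setup (illustrative only): pair observations from
# demonstrations with the demonstrated actions, then fit a policy by
# supervised regression ("behavior cloning").
rng = np.random.default_rng(0)
observations = rng.normal(size=(500, 32))    # 500 demo frames, 32-dim features (synthetic)
true_map = rng.normal(size=(32, 7))          # hidden demonstrator behavior (synthetic)
actions = observations @ true_map            # 7-DoF actions shown in the demos

# The "policy" here is a linear least-squares fit; real systems use neural
# networks, but the supervised framing is the same.
policy, *_ = np.linalg.lstsq(observations, actions, rcond=None)

def act(obs):
    """Predict an action for a new observation with the cloned policy."""
    return obs @ policy
```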

Credit: Cornell University

“Our work is like translating French to English; we’re translating any given task from human to robot,” said senior author Sanjiban Choudhury, assistant professor of computer science.

This translation task still faces a broader challenge, however: humans move too fluidly for a robot to track and mimic, and training robots with video requires huge amounts of it. Further, video demonstrations (of, say, picking up a napkin or stacking dinner plates) must be performed slowly and flawlessly, since any mismatch in actions between the video and the robot has historically spelled doom for robot learning, the researchers said.

“If a human moves in a way that’s any different from how a robot moves, the method immediately falls apart,” Choudhury said. “Our thinking was, ‘Can we find a principled way to deal with this mismatch between how humans and robots do tasks?’”

RHyME is the team’s answer: a scalable approach that makes robots less finicky and more adaptive. It supercharges a robotic system to use its own memory and connect the dots when performing tasks it has seen only once, by drawing on videos it has already seen. For example, a RHyME-equipped robot shown a video of a human fetching a mug from the counter and placing it in a nearby sink will comb its bank of videos and draw inspiration from similar actions, such as grasping a cup and lowering a utensil.
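The paper spells out the actual retrieval mechanism; purely as an illustration of the general idea described above, the sketch below matches each segment of a human how-to video against a robot’s own bank of practice clips by embedding similarity and stitches the best matches into a surrogate demonstration. The function names (encode, retrieve_proxy_demo), the segmentation, and the use of cosine similarity are assumptions made for this sketch, not the authors’ implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def retrieve_proxy_demo(human_segments, robot_clip_bank, encode):
    """For each segment of the human video (e.g. 'pick up mug', 'place in sink'),
    retrieve the most similar clip from the robot's own memory and stitch the
    matches into a surrogate robot demonstration for training.

    `encode` is assumed to map either kind of clip into a shared embedding space.
    """
    proxy_demo = []
    for segment in human_segments:
        query = encode(segment)                      # embed the human segment
        best_clip = max(robot_clip_bank,
                        key=lambda clip: cosine_similarity(query, encode(clip)))
        proxy_demo.append(best_clip)                 # e.g. a clip of grasping a cup
    return proxy_demo
```

In this framing, the “bank of videos” mentioned above plays the role of robot_clip_bank, and the retrieved clips (grasping a cup, lowering a utensil) stand in for robot experience that roughly matches the human demonstration.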

RHyME paves the way for robots to learn multi-step sequences while significantly lowering the amount of robot data needed for training, the researchers said. RHyME requires just 30 minutes of robot data; in a lab setting, robots trained with the system achieved a more than 50% increase in task success compared with previous methods, the researchers said.

“This work is a departure from how robots are programmed today. The status quo of programming robots is thousands of hours of tele-operation to teach the robot how to do tasks. That’s just impossible,” Choudhury said. “With RHyME, we’re moving away from that and learning to train robots in a more scalable way.”

Along with Kedia and Choudhury, the paper’s authors are Prithwish Dan, Angela Chao, and Maximus Pace.

More information:
Kushal Kedia et al, One-Shot Imitation under Mismatched Execution, arXiv (2024). DOI: 10.48550/arxiv.2409.06615

Journal information:
arXiv


Provided by
Cornell University


Citation:
Robot see, robot do: System learns after watching how-to videos (2025, April 22)
retrieved 22 April 2025
from https://techxplore.com/news/2025-04-robot-videos.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without written permission. The content is provided for information purposes only.


