    Interview with Kate Candon: Leveraging explicit and implicit feedback in human-robot interactions

    By Arjun Patel, July 30, 2025


    In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. Kate Candon is a PhD student at Yale University interested in understanding how we can create interactive agents that are more effectively able to assist people. We spoke to Kate to find out more about how she is leveraging explicit and implicit feedback in human-robot interactions.

    Could you start by giving us a quick introduction to the topic of your research?

    I study human-robot interaction. Specifically, I'm interested in how we can get robots to better learn from humans in the way that they naturally teach. Typically, a lot of work in robot learning involves a human teacher who is solely tasked with giving explicit feedback to the robot, but who isn't necessarily engaged in the task. So, for example, you might have a button for "good job" and "bad job". But we know that humans give lots of other signals, things like facial expressions and reactions to what the robot's doing, maybe gestures like scratching their head. It could even be something like moving an object to the side that a robot hands them – that's implicitly saying that it was the wrong thing to hand them at that time, because they're not using it right now. These implicit cues are trickier, they need interpretation. However, they're a way to get extra information without adding any burden to the human user. So far, I've looked at these two streams (implicit and explicit feedback) separately, but my current and future research is about combining them. Right now, we have a framework, which we're working on improving, where we can combine the implicit and explicit feedback.

    In terms of picking up on the implicit feedback, how are you doing that, what's the mechanism? Because it sounds incredibly difficult.

    It can be really hard to interpret implicit cues. People respond differently, from person to person, culture to culture, etc. And so it's hard to know exactly which facial reaction means good versus which facial reaction means bad.

    So right now, the first version of our framework is just using human actions. Seeing what the human is doing in the task can give clues about what the robot should do. They have different action spaces, but we can find an abstraction so that, if a human does an action, we know what the similar actions are that the robot can do. That's the implicit feedback right now. Then, this summer, we want to extend that to using visual cues, facial reactions and gestures.
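
    To make that action-space abstraction concrete, here is a minimal sketch in Python, assuming a hand-built mapping from concrete actions to shared abstract task steps; the action names and the dictionary-based mapping are illustrative assumptions for the pizza setting, not the lab's actual implementation.

        # Minimal sketch (illustrative assumption, not the actual framework):
        # human and robot actions are mapped into a shared abstract space, so
        # observing a human action hints at which robot actions are relevant.
        from typing import Dict, List

        # Hypothetical mapping from concrete actions to abstract task steps.
        HUMAN_ACTION_TO_ABSTRACT: Dict[str, str] = {
            "spread_sauce": "add_sauce",
            "sprinkle_cheese": "add_cheese",
            "place_pepperoni": "add_topping",
        }
        ROBOT_ACTION_TO_ABSTRACT: Dict[str, str] = {
            "pour_sauce": "add_sauce",
            "grate_cheese": "add_cheese",
            "hand_over_pepperoni": "add_topping",
            "hand_over_mushrooms": "add_topping",
        }

        def similar_robot_actions(human_action: str) -> List[str]:
            """Robot actions sharing the observed human action's abstract step."""
            abstract = HUMAN_ACTION_TO_ABSTRACT.get(human_action)
            return [a for a, ab in ROBOT_ACTION_TO_ABSTRACT.items() if ab == abstract]

        # Watching the human spread sauce implicitly suggests that sauce-related
        # robot actions are currently relevant.
        print(similar_robot_actions("spread_sauce"))  # ['pour_sauce']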

    So what kind of scenarios have you been testing it on?

    For our current project, we use a pizza making setup. Personally, I really like cooking as an example because it's a setting where it's easy to imagine why these things would matter. I also like that cooking has this element of recipes and there's a formula, but there's also room for personal preferences. For example, somebody likes to put their cheese on top of the pizza, so it gets really crispy, whereas other people like to put it under the meat and veggies, so that maybe it's more melty instead of crispy. Or even, some people clean up as they go versus others who wait until the end to deal with all the dishes. Another thing that I'm really excited about is that cooking can be social. Right now, we're just working in dyadic human-robot interactions where it's one person and one robot, but another extension that we want to work on in the coming year is extending this to group interactions. So if we have multiple people, maybe the robot can learn not only from the person reacting to the robot, but also from a person reacting to another person, and extrapolate what that might mean for them in the collaboration.

    Could you say a bit about how the work that you did earlier in your PhD has led you to this point?

    When I first started my PhD, I was really interested in implicit feedback. And I thought that I wanted to focus on learning solely from implicit feedback. One of my current lab mates was focused on the EMPATHIC framework, and was looking into learning from implicit human feedback, and I really liked that work and thought it was the direction that I wanted to go in.

    However, that first summer of my PhD was during COVID, and so we couldn't really have people come into the lab to interact with robots. So instead I ran an online study where I had people play a game with a robot. We recorded their face while they were playing the game, and then we tried to see if we could predict, based on just facial reactions, gaze, and head orientation, what behaviors they preferred for the agent that they were playing with in the game. We actually found that we could predict decently well which of the behaviors they preferred.

    The thing that was really cool was we found how much context matters. And I think this is something that's really important for going from just a teacher-learner paradigm to a collaboration – context really matters. What we found is that sometimes people would have really big reactions, but it wasn't necessarily to what the agent was doing, it was to something that they had done in the game. For example, there's this clip that I always use in talks about this. This person's playing and she has this really noticeably confused, upset look. And so at first you might think that's negative feedback, whatever the robot did, the robot shouldn't have done that. But if you actually look at the context, we see that it was the first time that she lost a life in the game. For the game we made a multiplayer version of Space Invaders, and she got hit by one of the aliens and her spaceship disappeared. And so based on the context, when a human looks at that, we actually say she was just confused about what happened to her. We want to filter that out and not consider it when reasoning about the human's behavior. I think that was really exciting. After that, we realized that using implicit feedback only was just so hard. That's why I've taken this pivot, and now I'm more interested in combining the implicit and explicit feedback together.
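
    As a rough illustration of that context-filtering step, here is a minimal sketch, assuming the system keeps a simple timestamped log of game events; the event format, the two-second window, and the all-or-nothing rule are assumptions for illustration rather than the study's actual pipeline.

        # Minimal sketch (assumed event log, not the study's actual pipeline):
        # only attribute a facial reaction to the robot if the player did not
        # just have a salient event of their own (e.g. losing a life).
        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class Event:
            time: float   # seconds into the game
            source: str   # "robot_action" or "player_event"

        def attribute_reaction(reaction_time: float, events: List[Event],
                               window: float = 2.0) -> Optional[str]:
            """Return what the reaction is attributed to, or None to discard it."""
            recent = [e for e in events if 0.0 <= reaction_time - e.time <= window]
            # A recent player-side event (like losing a life) likely explains the
            # reaction, so it is filtered out rather than treated as robot feedback.
            if any(e.source == "player_event" for e in recent):
                return None
            if any(e.source == "robot_action" for e in recent):
                return "robot_action"
            return None

        # The reaction follows the player's own mishap, so it is discarded.
        log = [Event(10.0, "robot_action"), Event(10.5, "player_event")]
        print(attribute_reaction(11.0, log))  # None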

    You mentioned the explicit side might be more binary, like good feedback, bad feedback. Would the person-in-the-loop press a button or would the feedback be given by speech?

    Right now we just have a button for good job, bad job. In an HRI paper we looked at explicit feedback only. We had the same Space Invaders game, but we had people come into the lab and we had a little Nao robot, a little humanoid robot, sitting on the desk next to them playing the game. We made it so that the person could give positive or negative feedback to the robot during the game, so that it would hopefully learn better helping behavior in the collaboration. But we found that people wouldn't actually give that much feedback because they were focused on just trying to play the game.

    And so in this work we looked at whether there are different ways we can remind the person to give feedback. You don't want to be doing it all the time because it'll annoy the person and maybe make them worse at the game if you're distracting them. And you don't necessarily always want feedback, you just want it at useful points. The two conditions we looked at were: 1) should the robot remind someone to give feedback before or after it tries a new behavior? 2) should it use an "I" versus "we" framing? For example, "remember to give feedback so I can be a better teammate" versus "remember to give feedback so we can be a better team", things like that. And we found that the "we" framing didn't actually make people give more feedback, but it made them feel better about the feedback they gave. They felt like it was more helpful, kind of a camaraderie building. That was only explicit feedback, but we now want to see whether, if we combine that with a reaction from someone, that might be the right moment to ask for explicit feedback.

    You've already touched on this, but could you tell us about the future steps you have planned for the project?

    The big thing motivating a lot of my work is that I want to make it easier for robots to adapt to humans with these subjective preferences. I think in terms of objective things, like being able to pick something up and move it from here to there, we'll get to a point where robots are pretty good. But it's the subjective preferences that are exciting. For example, I love to cook, and so I want the robot to not do too much, just to maybe do my dishes whilst I'm cooking. But somebody who hates to cook might want the robot to do all of the cooking. These are things that, even if you have the perfect robot, it can't necessarily know. And so it has to be able to adapt. And a lot of the current preference learning work is so data hungry that you have to interact with it tons and tons of times for it to be able to learn. I just don't think that that's realistic for people actually having a robot in the home. If after three days you're still telling it "no, when you help me clean up the living room, the blankets go on the couch not the chair" or something, you're going to stop using the robot. I'm hoping that this combination of explicit and implicit feedback will help it be more naturalistic. You don't necessarily have to know exactly the right way to give explicit feedback to get the robot to do what you want it to do. Hopefully, through all of these different signals, the robot will be able to hone in a little bit faster.

    I think a big future step (though not necessarily in the near future) is incorporating language. It's very exciting how much better large language models have become, but there are also a lot of interesting questions. Up until now, I haven't really incorporated natural language. Part of it is because I'm not fully sure where it fits in the implicit versus explicit delineation. On the one hand, you can say "good job robot", but the way you say it can mean different things – the tone is important. For example, if you say it with a sarcastic tone, it doesn't necessarily mean that the robot actually did a good job. So, language doesn't fit neatly into one of the buckets, and I'm interested in future work to think more about that. I think it's a super rich space, and it's a way for humans to be much more granular and specific in their feedback in a natural way.

    What was it that inspired you to go into this area?

    Honestly, it was a bit accidental. I studied math and computer science in undergrad. After that, I worked in consulting for a couple of years and then in the public healthcare sector, for the Massachusetts Medicaid office. I decided I wanted to go back to academia and get into AI. At the time, I wanted to combine AI with healthcare, so I was initially thinking about clinical machine learning. I'm at Yale, and there was only one person at the time doing that, so I looked at the rest of the department and then I found Scaz (Brian Scassellati), who does a lot of work with robots for people with autism and is now moving more into robots for people with behavioral health challenges, things like dementia or anxiety. I thought his work was super interesting. I didn't even realize that that kind of work was an option. He was working with Marynel Vázquez, a professor at Yale who was also doing human-robot interaction. She didn't have any healthcare projects, but I interviewed with her and the questions that she was thinking about were exactly what I wanted to work on. I also really wanted to work with her. So, I accidentally stumbled into it, but I feel very grateful because I think it's a much better fit for me than clinical machine learning would necessarily have been. It combines a lot of what I'm interested in, and I also feel it allows me to flex back and forth between the mathy, more technical work and the human side, which is also super interesting and exciting to me.

    Have you got any advice you'd give to someone thinking of doing a PhD in the field? Your perspective is particularly interesting because you've worked outside of academia and then come back to start your PhD.

    One thing is that, I mean it's kind of cliche, but it's not too late to start. I was hesitant because I'd been out of the field for a while, but I think if you can find the right mentor, it can be a really good experience. I think the biggest thing is finding an advisor who you think is working on interesting questions, but also someone that you want to learn from. I feel very lucky with Marynel, she's been a wonderful advisor. I've worked quite closely with Scaz as well, and they both foster this excitement about the work, but also care about me as a person. I'm not just a cog in the research machine.

    The other thing I'd say is to find a lab where you have flexibility in case your interests change, because it's a long time to be working on a set of projects.

    For our final question, have you got an interesting non-AI related fact about you?

    My main summertime hobby is playing golf. My whole family is into it – for my grandma's 100th birthday party we had a family golf outing where we had about 40 of us golfing. And actually, that summer, when my grandma was 99, she had a par on one of the par threes – she's my golfing role model!

    About Kate

    Kate Candon is a PhD candidate at Yale University in the Computer Science Department, advised by Professor Marynel Vázquez. She studies human-robot interaction, and is particularly interested in enabling robots to better learn from natural human feedback so that they can become better collaborators. She was selected for the AAMAS Doctoral Consortium in 2023 and HRI Pioneers in 2024. Before starting in human-robot interaction, she received her B.S. in Mathematics with Computer Science from MIT and then worked in consulting and in government healthcare.




    AIhub is a non-profit dedicated to connecting the AI community to the public by providing free, high-quality information in AI.





    Lucy Smith is Managing Editor for AIhub.
