- 1 About Radhika Nagpal
- 2 About the Interview
- 3 Copyright Statement
- 4 Interview
- 4.1 Introduction
- 4.2 MIT Education
- 4.3 Bell Labs
- 4.4 Doctoral Studies at MIT
- 4.5 Faculty Career and Robotics Research
- 4.6 Robotics and Mindstorms Projects
- 4.7 Robotics and Kilobot Research
- 4.8 Graduate and Postdoc Students
- 4.9 Advice for Young People
About Radhika Nagpal
Radhika Nagpal is the Fred Kavli Professor of Computer Science, School of Engineering and Applied Sciences, Wyss Institute for Biologically Inspired Engineering, Harvard University. She was born in the United States while her father studied for his doctorate at Georgia Tech and grew up in Amritsar, India. Nagpal received her S.B. and S.M. in Electrical Engineering and Computer Science from MIT in 1994 and her Ph.D. in Electrical Engineering and Computer Science, also from MIT, in 2001. She completed a dissertation titled “Programmable Self-Assembly using Biologically-Inspired Local Interactions.”
Nagpal is known for her work in biologically inspired multi-agent systems, including swarm robotics and bio-inspired robot design, decentralized collective algorithms, and global-to-local abstraction, as well as biological multi-agent systems, including models of multicellular morphogenesis and collective insect behavior.
Nagpal has received many awards and honors, including: the National Talent Search Scholarship Award, India, 1987; AT&T Bell Labs Graduate Fellowship Award for Women (GRPW), 1995-2001; Microsoft New Faculty Fellowship, 2005; NSF CAREER Award, 2007; Anita Borg Early Career Award (BECA), 2010; Radcliffe Institute Fellowship Award, 2012; McDonald Award for Excellence in Mentoring and Advising, Harvard, 2015; Science Top 10 Breakthroughs, Science, 2014; Nature's 10 Award: Top ten scientists and engineers who mattered, Nature, Dec. 2014; TED Speaker, Annual TED Conference, Vancouver, Apr. 2017.
About the Interview
RADHIKA NAGPAL: An Interview Conducted by Peter Asaro, IEEE History Center, 24 Jan 2015
Interview # 809 for Indiana University and the IEEE History Center, The Institute of Electrical and Electronics Engineers, Inc.
This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to Indiana University and to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.
Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, IEEE History Center, 445 Hoes Lane, Piscataway, NJ 08854 USA or firstname.lastname@example.org. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user. Inquiries concerning the original video recording should be sent to Professor Selma Šabanović, email@example.com.
It is recommended that this oral history be cited as follows:
Radhika Nagpal, an oral history conducted in 2015 by Peter Asaro, Indiana University, Bloomington, Indiana, for Indiana University and the IEEE.
Interview
Interviewee: Radhika Nagpal
Interviewer: Peter Asaro
Date: 24 Jan. 2015
Place: College Station, Texas
Introduction
If you could start by telling us your name, where you were born and grew up, and where you went to school.
My name is Radhika Nagpal. I grew up in Amritsar in India, but I was actually born in the US. My father did his Ph.D. at Georgia Tech. When I was eight, our whole family moved back to India and I didn't come back to the US until undergraduate, which I did at MIT.
MIT Education
How did you decide to go to MIT?
What did you study there?
Well, at the time, Amritsar had a lot of turmoil, there was a lot of terrorist activity, and the city would sort of shut down. We used to have curfew at 7:00. So I think, as a high school student, my goal was to get as far away [laughs] from home as I could. I applied to a bunch of schools in the US, figuring that was far and figuring that I could potentially go there. But I didn't know a whole lot about any of the schools, and there wasn't the Internet. We would write, and they would send us little brochures; those brochures were basically all I knew, though my father knew a bunch about the universities. I applied to five or six universities, and I decided to go to MIT for a couple of reasons. One of which was that, of course, MIT is really well known for being a great engineering school. But also, this was 1989, and MIT had a great program where they paid for your tuition if you couldn't pay for yourself. As long as you were admitted, they figured out how to cover you. My parents were pretty poor at that time, so it was a really big deal that they could do that and the other universities couldn't. I got to go to college [laughs] thanks to that program.
What did you major in?
I didn't know what I was going to major in, actually. I went in thinking I would do electrical engineering, but mostly I knew that I didn't want to do biology, I didn't want to do mechanical engineering, and I didn't know anything else. [Laughs] I took my first computer science class, 6.001, and that was it. Then, I think, 6.004, which is the computer architecture class. So programming and making computers, and I was hooked.
How did you wind up getting interested in robotics?
That's a pretty long path. I was initially interested in computer architecture and from that I got interested in networking and distributed systems and parallel computing. And I got a chance to work at Bell Labs and that was really fun. That's actually where I got interested in research. That was the really big eye opener for me, for what it meant to do research, how people had fun doing research. [Laughs] I told them, "I want this job." They said, "Go get a Ph.D."
When I joined for a Ph.D., I sort of floundered a little bit. I wasn't sure what I wanted to do. A lot of topics that I thought I understood were really different from what I had predicted. There was a new project that had just started at that time called amorphous computing. It was Gerry Sussman, Tom Knight, and Hal Abelson. The idea was that we would learn from biology as a way to think about programming systems that had a huge number of parts. If you think about a distributed system, like a network, they were saying, cells are a network, or ants are a network, or physics is a network of molecules, and if you could compute with large numbers, you would think really differently about computation.
They published this white paper and it was so exciting. A bunch of people flocked to that group [laughs] leaving other groups. Actually, the area called synthetic biology grew out of that as well. The idea was that not only would we learn from biology, we would program cells the way we program computers. It was a really, really exciting time, and I think that in the beginning, we thought, "What would be a physical instantiation of this idea?" Sensor networks were one example. Modular robots, which are kind of made of different modules, were another example. Then people would talk about smart dust. It was a super exciting time.
A lot of different fields were, sort of, pushing in this area of making really cheap individuals. But even then, I didn't want to do robotics, and one of the reasons was it was really hard and it was really expensive to own a robot.
Computation was really hard, cameras were really poor, and by the time I graduated, it was still super hard to do robotics because all of the processing and information and sensing was very different. But a few years after I started as faculty, 3D printers started coming online. There were just so many different ways you could actually build robots. You could put kits together. The cost went down, and then I thought, "Oh, I don't have to write a grant, then." [Laughs] I had thought, "Oh, you know, if I wanted ten robots, that would cost me a huge amount of money," but as soon as that cost went down, I got excited about owning robots. It was maybe my third or fourth year as faculty when I really started doing robotics. Not just talking about it, not just doing the theoretical side, but actually owning robots and building robots. Now we have a thousand robots in my lab. [Laughs] It was a very quick trajectory from nothing to a thousand.
Bell Labs
What year were you at Bell Labs? Who did you work with there?
I was at Bell Labs in 19—[laughs] this might be tougher. You might have to verify this. I first went there 1992? No, 1993 and 1994. MIT had a program where you could partner with a company and you would do your master's with the company. You would spend a whole semester there as part of your five-year program. I spent a semester there and then I spent a winter there, and then I deferred grad school and spent a year there [laughs] because it was a lot of fun.
I worked most closely with Rae McLellan, who works in computer architecture. I was actually right next to the Unix room, so I got to meet Brian Kernighan, Dennis Ritchie, Ken Thompson, and Dave Presotto. It was this whole group in computer science where, especially at that point, C was the only language we used, and Bjarne Stroustrup and C++ were sort of starting to happen, so it was just this illustrious group, and they would all go to lunch together. I could go to lunch with them and they'd talk about their hobbies and their passions, and then we'd go and we'd work, and it was great.
At Bell Labs, there were not that many women and there were not that many students, so I got treated like royalty. [Laughs] It was really exciting to have all these super famous people around, who still know me and still check up on me every so often to ask how I'm doing. For me, it was a really major experience to be at Bell Labs.
Doctoral Studies at MIT
For your Ph.D., what was your thesis topic and who was your advisor?
My thesis was with Gerry Sussman and Hal Abelson. My thesis topic was really about how you could take an idea of what you want a collective to do-- I mean, this was actually maybe my first example of being able to do this. Since then, a lot of my research has been built around this idea: if you have a collective of individuals and they all have simple local rules, what you can't do is design the rules bottom-up, because you're just going to be stuck trying to see every variation of what goes on, and if you look at what an individual is doing, it's not well connected with the global behavior. So is there a way to go inverse? Is it possible to write a compiler where I say, "Well, what I want the group to achieve is this," and then the computer sort of figures out what it is that all of the individuals should do? The program has to somehow deal with the fact that some of your robots may not work, or some of them might get lost, or you don't know exactly if there were 1,000 or 1,200, right? At that number, you don't want to be counting anymore. Traditional planning thinks about maybe smaller groups: as individuals, what can each of us do, and how will we coordinate together? We're thinking you have a bunch of identical individuals and they're not able to coordinate well, but you still want to achieve something, so you can up the number, but you can't make them more predictable. In my thesis, I was actually thinking about a programmable material that might've been made out of many different actuators that would fold. What's interesting is there are now folding robots. [Laughs] For me, that was an idea and a way of thinking about a new sort of active material or active environments.
The idea was that a lot of computation would be embedded in everything. And if it was embedded in everything, then it's a programming problem. It means you can't be rebooting individual bits of everything. It has to be sort of self-managing in some way. I was thinking about active structures, but the real gist of what I did was show that you could take very complicated ideas globally and systematically compile them.
Who else did you work with or interact with while you were a graduate student?
I think it was a pretty active time. Obviously, there was all the synthetic biology stuff going on, so Tom Knight was sort of one of the-- The iGEM [International Genetically Engineered Machine] competition started around that time, which is these huge competitions where people come and program cells. I was part of the first set. There were maybe twenty of us, and now the competition has 800, 900 people. It's huge. It was huge growth.
The other group is, of course, the AI Lab. I was always connected with people in robotics, so Holly Yanco-- actually, all of Rod Brooks' group. [Laughs] Just because what I was interested in then-- what I'm interested in now is very closely related to embodied intelligence, the idea that the complexity you see isn't necessarily arising from complex decisions and complex thinking at the level of the robot. It's arising from the interactions with the environment or the interactions with others, and that that's what you want. You don't want to make a more and more complicated robot. You want to make a simpler and simpler robot. His group had a lot of that. In fact, for one of my exams-- they have something called an area exam, where you're supposed to read papers-- I remember that Rod Brooks assigned me Lynne Parker's early papers and Maja Matarić's early papers, so that's sort of how I got to know them. Cynthia Breazeal was a graduate student there, Holly Yanco was a graduate student there, and Brian Scassellati was a graduate student there. There's a huge number of people from that group who are now faculty in different places, in robotics, but also in other areas. But very much, the embodied intelligence kind of area was a big connection for me.
When did you finish your Ph.D.?
Where did you go after that?
Faculty Career and Robotics Research
At the time, 2001 was an interesting year. [Laughs] My entire game plan was to go back to Bell Labs. It was very clear to me. I'd come to get a Ph.D. I just hoped that after I did all this crazy stuff for my Ph.D., they would still hire me. But in 2001, all of the research labs were tanking. Xerox PARC, which was my other favorite place because they did modular robots, was in trouble, DEC was in trouble, and everything was in trouble. Microsoft was just starting, and Google didn't really-- I mean, Google existed, but the research lab didn't exist. Suddenly, research labs were just not an option anymore, and I also had my daughter when I was a grad student. I had a little kid and I was finishing my Ph.D. It was like, this is so complicated, I just wanted to defend; [laughs] then I'd think about what I was going to do with my life.
At the same time, they started these lecture positions at MIT because a lot of faculty were on leave starting startups. I decided that I'd be a lecturer for a couple of years, which basically was a term-limited thing, as a way of seeing if I liked being faculty. It seemed like a good trial. I'd never really tried being a teacher before, and I really liked it. I did research and I also taught students, so it was like getting a little taste of what it would feel like to be faculty.
At the end of that, I applied for faculty positions, and I came to Harvard with one more detour. Actually, my career is defined by detours. [Laughs] Whenever it's, "This is the path," I take a little short deviation, come back, and then another little short deviation. My second short deviation after the lecturer position was to spend a year in a biology lab, and this was super fun. This was maybe the second super influential thing for me, because systems biology and synthetic biology were also rising at the same time, and they were really interested in collective behavior. They were open enough to want to start conversations with computer scientists, mathematicians, physicists, and try to start bringing everyone together to think about important problems in biology.
When I talked to them about active materials unfolding, many of the ideas had been taken from developmental biology, so we connected. The new department chair there, Marc Kirschner, said, "Well, why don't you come spend a year here? When you become faculty, you'll never have time again. But if you come for a year here, you'll influence everyone. We'll influence you. You can do some experiments." I actually spent a year where, I say, "I tried"-- I tried very hard [laughs] to do actual experiments on the same organisms that I had read papers about. It was an eye-opening experience. I mean, it was such an incredible-- That group still is super connected, because even though all of us now are much more senior and sort of established in our places, there's actually a lot of connection still between how we see groups working together, how cells work together. This August, for one of the people who was sort of a mentor for me then, we went and taught his group how to program kilobots. We had twenty biologists programming robots, and I thought, "This is just heaven." [Laughs]
[During that year,] I tried to do experiments and I learned a lot about how biologists think about this problem, and also how hard it is to reverse engineer a real system that is robust and is doing all the things you care about, but now you have to sort of infer back what's going on. There are just limited tools you have to ask that question. Whereas with robotics, we build it and then we still see things we didn't predict, but at least you can start to work out from first principles what happened, whereas that's much harder to do in biology. But that discovery process is very, very similar.
You went into the computer science department?
The computer science department, yes.
Were there other people doing robotics there at that time?
There were not. [Laughs] It was kind of-- so Harvard has a small group, but one of the defining features of the group, especially when I joined, was that there are a lot of interdisciplinary people. David Parkes works on economics and CS; Barbara Grosz, who's been well known in AI for a long time, works on human-computer interaction; and Stuart Shieber works on computational linguistics. When I said, "Oh, I want to work on computation and biology," they were like, "Great." That was not the reaction I got from a lot of universities I went to.
Whatever I did was weird elsewhere, whereas here, what I did seemed normal, in a sense. Early on, we started doing some robotics. My first robots were actually built off of Mindstorms, or even, literally, the older Mindstorms, so Lego robots were my first robots. I think one advantage of not having a roboticist there was that I didn't have to be too embarrassed about it. There was-- actually, I take that back. Rob Howe was there; he does hands, and he also did a lot of surgical robots. There was a lot of distance then between what he did and what I did. I didn't even consider myself a roboticist. I considered myself an AI person. Over time, there's now a lot of people. There's Rob Wood, who does insect-scale robots; there's Conor Walsh, who does exoskeletons; and Rob Howe. Now, suddenly, we have this robotics group, whereas before, when I joined, it was really sort of me alone trying to navigate my way.
Of course, I wasn't that far from MIT, so I could always go back, and [laughs] I have a huge network to ask questions. Holly Yanco and James McLurkin especially did a lot of swarm robotics at MIT at the time. When I started my faculty position and taught swarm robotics, James would bring over his robots and teach a class. In the process, he really taught me a lot about what is important in thinking about robots, what things bog you down, [and] what things are important when you think of designing them. In fact, many of the things that he wrote about and talked about influenced what we did with kilobots, because he was the one who used to do swarm robotics and he had so many lessons that we learned from. It's really these interactions. You sometimes have interactions because you're friends. [Laughs] James is friends with both me and my husband, and we just had things in common. We would talk all the time and I thought, "I will never have robots. You can just bring your robots." Now it's sort of the opposite. We all have lots of robots and it's great. Now, we talk about where we are going with our robots.
Robotics and Mindstorms Projects
What was the first project you did with Mindstorms?
One of the projects that I've been interested in since the beginning of my faculty position is self-assembly, but self-assembly where robots are building something, inspired by how termites build mounds. We did a lot of theoretical stuff on that. Then our first implementation was robots moving around tiles. The idea was that you needed only a few simple sensors and local rules to do it. We thought, "Well, we should be able to implement it with something as simple as the Lego Mindstorms robots." It was sort of also a proof, right? If you say these are simple robots, well, how simple is simple? At that time, the nice part especially about using the Mindstorms was that, of course, you could change the body.
I found that a lot of things that interest me in robotics involve the body of the robot. You can't just go somewhere and say, "I want a robot that has this design and this arm and is positioned like this." You just get a robot, like a Pioneer, or something that can move around and look. I always wanted something that could move around and manipulate or climb, and those were always in the research category. Those were never things that were easy to buy, but with Legos, you could kind of build whatever you needed to build, so we basically built the custom robot that we needed. It was very complicated [laughs] and it used two computers, basically, or two of the Mindstorms bricks, and it had a lot of parts. But it had an arm that could move up and down, and it had a gripper that could close, and it had lots of touch sensors and vision sensors, so we could actually implement the whole algorithm, just in 2D. For me, 3D printers were an amazing enabler.
Now, I have a lot more students in my lab who can imagine things and then they just happen. We don't have to imagine things and then deal with the difficulty of trying to make that thing and the fact that it might be too heavy or it might be difficult to machine or it might take too much time. Or maybe the student doesn't have the physical skills. Even though they have the mental creativity, they don't have physical skills to do it. Those used to stop a lot of our projects before. So we would just stick with the Legos. And now, literally, students come and I'm, like, "Well, let's imagine that you wanted to make an army ant that was crawling on top of army ants and making a tower. How would you go about-- what would be the design of that robot? What would that robot need to know? What would it think? What sensors?" Literally, we can compose those ideas so fast, so you're no longer constrained. And with 3D printers, you can also make many, which is another sort of fun thing. You can go through revisions. But at the end, if you have a good design, you don't have one robot. You can have thirty robots and that's a really big enabler, I think.
Robotics and Kilobot Research
How did that research trajectory lead you to the kilobots?
[Laughs] Well, there's more and more and more, right? James McLurkin always had this beautiful swarm. But then, there's also a couple of groups in Europe that really have been in the forefront of this, and in particular, at EPFL, [Dario] Floreano's group and [Alcherio] Martinoli's group and [Marco] Dorigo's group in Brussels. So there's huge-- they've always been interested in swarm robotics. I sort of came to that a little late. Many of them I met at conferences and I talked and they were these conversations where you run into somebody. I ran into Alcherio Martinoli, at an AI conference and we were sort of some of the few people doing robotics at that AI conference. We ended up chatting and then, five hours later—[laughs] we touched so many topics. I had no idea that there was somebody who was interested in so many of the same topics I was interested in. That relationship turned out to be really great.
They designed robots like the e-puck, and his group was pushing numbers. At that point they had swarms of about 200 robots, and that robot was one of the first robots I bought. I bought robots that other people had designed, in order to use in my class or in my research. But you always sort of run into this thing where it's actually a great thing for classes, but for research, you always want to modify the robot in some desperate way. We just sort of had lots of robots, and the TERMES robots are climbing robots.
We sort of started with those, but then Mike Rubenstein joined my group as a postdoc, and his thesis was really closely related to things that I liked in my Ph.D. thesis. We had a lot in common. He thought about self-repair and developmental biology and self-assembly, so I thought he was going to come to my group and we were going to work on future, sort of, theoretical ideas. We did talk about various ones for a few months, but at the end of a couple of months [laughs] he came to me saying, "You know, actually, what I want to do is build a thousand robots." I said, "You've got to be kidding me. [Laughs] Have you seen the e-pucks? We can't handle twenty of them. You've got to be kidding me, with three TERMES. Okay, well, so what's going to make you succeed where everybody else failed? Because the problems are very real."
Just as James had influenced me, James and other people had influenced him. He had the list of problems, and we sort of started thinking about what were the key issues that stop you at a hundred. He just turned the question around and said, "You know, what if we start designing by assuming from the beginning that we'll make 1,000? We won't say we'll make ten and then we'll make twenty and then we'll scale up. We can only go backwards from there, so we're going to start at 1,000." If you start at 1,000, there's a whole bunch of things you're just not allowed to do. You can't have manual labor in designing these robots. You can't have too much cost per robot. You need a lot of things to be made by pick-and-place machines.
Once we started down that trajectory, it was much more obvious how we were going to get there. The two pretty innovative ideas that Mike came up with were using vibration motors instead of regular wheels, and, instead of using regular wireless or regular IR beamed up into the upper [laughs] part of the atmosphere, using IR reflected off the table, which is just a lot more robust. There were lots of problems I knew people had with robot-to-robot communication, and his technique basically avoided a lot of those problems. And the vibration motors were really fun. I mean, we'd seen a lot of the bristlebots and toothbrush robots, and so we put these things on. He said, "I think I've figured it out." We'd sit there and we'd bend the legs and everything, and it turned out that we must've spent six months or so on the locomotion strategy, not realizing that we actually had no understanding [laughs] of what we were doing. Every time we would make a robot, it would locomote differently. In one case, he bent the legs backwards and the robot went forward. In another case, he bent them back and the robot went backwards. We're thinking, "Okay. [Laughs] This doesn't make any sense."
Somebody from another group, Mahadevan's group, suggested that we put it under high-speed photography to see it. Then we realized that the locomotion was really different from what we had thought. It turned out that the vibration motors have a bias. What he had done is, in one robot he had placed them one way, and in another robot the other way, but they look symmetric, so you don't really notice that you're doing that. That's why all our robots were behaving differently. So sometimes you think you know what you're doing, but especially with new mechanical behavior, often you really need to test your intuition. Otherwise, it's so easy to be wrong.
I think a lot of what I've learned in the last five, six years is to just continually try things, because you try them and it gives you new intuition and new ideas. If you wait until you figure you have the whole problem solved, you may have missed something that was really crucial to thinking about the problem. I think that's maybe how a lot of roboticists feel, that the real world is much more complex than we want to give it credit for, or than we want it to be. But sometimes, it can go the other way around, and it can actually be easier. The vibration motors actually turned out to be easier than we had predicted. You didn't need to bend the legs at all. The legs were actually irrelevant. [Laughs] It turned out we didn't have to have any precision in how the legs were put in. Our life became easier as a result of understanding what was going on.
Positioned as you are between, sort of, biology and computer science and robotics and the sort of longer trajectories of self-assembly, self-organizing systems, bionics, biologically inspired, what does robotics really bring to this equation? What have you learned from these sort of older histories?
I think that in a lot of the areas that you mentioned, we think of physical systems, and in that sense, computer science and robotics go very well together. Robotics is the part of computer science that touches physical systems directly, but it also brings in mechanical engineering and electrical engineering. What are other ways of thinking about the same problem? I find that if you think about programmable materials, it's not really a robot. It's not a robot in any sense of the word that we might've thought a robot meant, but it is an essential part of robotics because it's basically endowing physical things with behavior. And to me, one of the interesting things is that I now look at everything in biology as a robot, right? It has behavior. It has mechanics. It has physics. It has an interaction with the world, and often the questions in biology are the same. How much comes from the body versus the brain? Or even if you were thinking of a cell, how much comes from physics versus the cell's active decision to do something?
When I interact with… when we were teaching biologists to do robotics, what he was saying was that he felt robotics helps the biologist think algorithmically about cells. What is the algorithm that a cell runs? And I was just so-- my jaw [laughs] was on the floor: "He said the word algorithm." I think it works both ways. If you think about biological systems as running programs, that actually gives you a certain power in thinking about biology as well. Robotics brings a huge flavor to all of these fields, and you can think of it also as what AI brings, because it's the same concept. But there's also a physical-- there has to be a physical part to it. It's not just what decision a cell makes. It's also about the forces a cell experiences. Or people will think, "Well, maybe there are external forces on something and that causes the cells to align," or, "Maybe the cells are actually measuring forces from nearby cells and aligning." That's a big difference between what kind of program is in the cell in the first case and the second case, and people want to know the difference. They want to know if they have to manipulate the genetic code of the cell, or whether they just need to manipulate the environment and the cells will correctly heal something.
A lot of these questions are really important, but they're kind of the dual of questions in robotics. Now that I've had a taste of building physical systems, it would be very hard to go back. I think that's also part of what robotics brings: it's really fun and it's really tangible. For me, anything-- sensor networks, smart houses-- all of these are not so separate from robotics. As you may have already heard, there are a lot of people who work in a lot of cross areas with robotics. But I think just embedding computation into the physical world in some way is something that roboticists feel comfortable with.
Graduate and Postdoc Students
Who are some graduate students or postdocs that you trained who have gone on to do work in robotics?
Mostly in robotics, I've had very recent students. Michael Rubenstein is now a research scientist, as is Justin Werfel, and Kirstin Hagelskjaer Petersen is among my most recent set. Most of my earlier students did more AI, more theory, and some biology. We do work with biologists as well.
Unfortunately, we're running out of time.
Advice for Young People
The wrap-up question is, what's your advice for young people who might be interested in a career in robotics?
That's a tough one. [Laughs] Just do it. Just do it. It's really fun, and I think a lot of times we spend too much time worrying about what we're doing. Especially, smart people spend too much time [laughs] worrying about what they're doing. Sometimes you need to just do it and find what you like. I didn't start by liking robotics. I started by liking something else. I don't promise that robotics is the only thing I'm going to do with my life. I don't know. Every year, I'm like, "This is what I love the most." Five years down the road, it's something different. If you like robotics now, do it now. Why worry about anything else? [Laughs]
Thank you very much.