Skyscrapers are rising rapidly around the world, continuously transforming city skylines. Will a friendly neighborhood Spider-Man help out? No, but Chinese researchers at the Shenyang Institute of Automation (SIA) of the Chinese Academy of Sciences have designed a promising alternative. Recently, they reported the development of a contact aerial manipulator system that shows high flexibility and strong mission adaptability. The system comprises a single-degree-of-freedom manipulator, a cube-frame end effector, and a hex-rotor UAV. They presented their findings at the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), an international conference on robotics and intelligent systems held Nov. 4-8 in Macao.
In today's factories and warehouses, it's not uncommon to see robots whizzing about, shuttling items or tools from one station to another. But they have a much harder time winding through narrow spaces to carry out tasks such as reaching for a product at the back of a cluttered shelf, or snaking around a car's engine parts to unscrew an oil cap. Now MIT engineers have developed a robot designed to extend a chain-like appendage flexible enough to twist and turn in any necessary configuration, yet rigid enough to support heavy loads or apply torque to assemble parts in tight spaces. The appendage design is inspired by the way plants grow, which involves the transport of nutrients, in a fluidized form, up to the plant's tip. Likewise, the robot consists of a "growing point," or gearbox, that pulls a loose chain of interlocking blocks into the box. The researchers presented the plant-inspired "growing robot" this week at the IEEE International Conference on Intelligent Robots and Systems (IROS) in Macau.
In the not too distant future, robots may be dispatched as last-mile delivery vehicles to drop your takeout order, package, or meal-kit subscription at your doorstep -- if they can find the door. Standard approaches for robotic navigation involve mapping an area ahead of time, then using algorithms to guide a robot toward a specific goal or GPS coordinate on the map. Imagine, for instance, having to map in advance every single neighborhood within a robot's delivery zone, including the configuration of each house within that neighborhood along with the specific coordinates of each house's front door. Mapping every single house could also run into issues of security and privacy. Instead, the researchers' new approach enables a robot to use clues in its environment to plan out a route to its destination, which can be described in general semantic terms, such as "front door" or "garage," rather than as coordinates on a map. The new technique can greatly reduce the time a robot spends exploring a property before identifying its target, and it doesn't rely on maps of specific residences.
Researchers at MIT are helping autonomous cars deliver on the promise of safer roads with a new trick that lets driverless vehicles see around corners to pre-emptively spot other vehicles or moving hazards that human drivers would never see coming. There have been several attempts to make cameras that are able to see around corners, including other MIT researchers who revealed a system that can shine light into a room from the outside, capture the light that’s bounced back, and then process the results to calculate a 3D model of objects inside that are otherwise hidden from human observers. It required a special camera, however, including lasers and other hardware that would inevitably increase the cost of an autonomous vehicle, which would, in turn, hurt sales. You didn’t think all these carmakers are developing driverless cars for fun, did you? The new approach to spotting oncoming hazards around corners is being presented at the International Conference on Intelligent Robots and Systems in Macau, China, next week, and it builds on and improves an earlier system called ShadowCam that was developed a few years prior. Instead of using laser scanners or x-ray technology, the system uses video cameras focused on a very specific area, which in this case is the ground where two perpendicular roads or paths meet.
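The basic idea of watching a fixed patch of ground for illumination changes can be pictured with a toy frame-differencing sketch. This is a loose stand-in for the concept only, not the authors' ShadowCam pipeline (which detects far subtler shadow changes than a raw difference threshold); the function name, threshold, and synthetic frames are all hypothetical:

```python
import numpy as np

# Crude sketch of the shadow-watching idea: monitor a fixed ground region
# and flag frames where average intensity shifts by more than a threshold.
# Threshold and synthetic frames are illustrative, not from the paper.

def shadow_alert(frames, roi, threshold=2.0):
    """Return indices of frames whose mean absolute intensity change
    inside the region of interest exceeds `threshold` (8-bit units)."""
    (r0, r1), (c0, c1) = roi
    alerts = []
    prev = frames[0][r0:r1, c0:c1].astype(np.float32)
    for i, frame in enumerate(frames[1:], start=1):
        cur = frame[r0:r1, c0:c1].astype(np.float32)
        if np.abs(cur - prev).mean() > threshold:
            alerts.append(i)
        prev = cur
    return alerts

# Synthetic sequence: static ground, then a shadow darkens the watched corner
frames = [np.full((64, 64), 200, dtype=np.uint8) for _ in range(6)]
for i in range(3, 6):
    frames[i] = frames[i].copy()
    frames[i][:32, :32] -= 60   # moving shadow darkens the ROI

print(shadow_alert(frames, roi=((0, 32), (0, 32))))  # → [3]
```

Only the frame where the shadow first arrives trips the alert; once the shadow is static again, the difference drops back to zero.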
We tend to oversell the "scariness" of robots, right? The Boston Dynamics robot does a backflip or parkour, and we're cracking jokes about the revolution and our potential robot overlords. But honestly, how bad could it be, right? How about robot snakes that have learned how to climb ladders? The above abomination is the creation of Kyoto University and the University of Electro-Communications. It was unveiled last week at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems.
When we last met with Salto the jumping robot it was bopping around like a crazed grasshopper. Now researchers have added targeting systems to the little creature, allowing it to maintain a constant hop while controlling exactly when and where Salto lands. Called “deadbeat foot placement hopping control,” the new system lets Salto watch a surface for a target and essentially fly over to where it needs to land using built-in propellers. Researchers Duncan Haldane, Justin Yim and Ronald Fearing created Salto with support from the Army Research Office, and they will be exhibiting the little guy at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. The team upgraded Salto’s controller to make it far more precise on landing, a feat that was almost impossible using the previous controller system, SLIP. “The robot behaves more or less like a spring-loaded inverted pendulum, a simplified dynamic model that shows up often enough in both biology and robotics that it has its own acronym: SLIP,” wrote Evan Ackerman at IEEE.
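The SLIP model Ackerman describes (a point mass bouncing on a massless spring leg) can be sketched in a few lines. The following is a generic, vertical-only illustration with made-up parameters, not the Salto team's actual model or controller:

```python
# Minimal 1-D spring-loaded inverted pendulum (SLIP) hop, vertical only.
# All parameters are illustrative, not Salto's real mass or leg stiffness.
m = 0.1      # body mass (kg)
k = 400.0    # leg spring stiffness (N/m)
l0 = 0.15    # leg rest length (m); below this height the leg is loaded
g = 9.81     # gravity (m/s^2)
dt = 1e-4    # integration step (s)

y, vy = 0.30, 0.0   # start in flight, above the leg's rest length
apexes = []
prev_vy = vy
for _ in range(200_000):            # simulate 20 s
    if y > l0:                      # flight phase: ballistic motion
        ay = -g
    else:                           # stance phase: spring pushes back up
        ay = k * (l0 - y) / m - g
    vy += ay * dt                   # semi-implicit Euler update
    y += vy * dt
    if prev_vy > 0 and vy <= 0 and y > l0:
        apexes.append(y)            # record apex height of each hop
    prev_vy = vy

print([round(a, 3) for a in apexes[:3]])
```

With no damping in the model, the mass returns to roughly the same apex height on every hop; real hopping robots must inject energy each stance phase to fight losses, which is part of what makes precise foot placement hard.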
By using gestures, users can match their movements to the robot's to complete various tasks. "A system like this could eventually help humans supervise robots from a distance," says CSAIL postdoctoral associate Jeffrey Lipton, who was lead author on a related paper about the system. "By teleoperating robots from home, blue-collar workers would be able to tele-commute and benefit from the IT revolution just as white-collar workers do now." They presented the paper this week at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Vancouver. With these systems, a delayed signal could lead to nausea and headaches, and the user's viewpoint is limited to one perspective. The CSAIL team's system is halfway between these two methods.
The robot can walk, move over pipes, climb stairs and ladders, and crawl over scattered debris. Honda has unveiled E2-DR, a functioning prototype bipedal robot that is designed for disaster relief efforts, according to a report in IEEE Spectrum. The 5'5"-tall robot, unveiled at the International Conference on Intelligent Robots and Systems in Vancouver, Canada, has a flexible and fully articulated structure. It weighs around 85 kg and walks at 4 km/h (quadrupedal walking at 2.3 km/h), but can bear harsh conditions thanks to dust- and splash-proofing. A plethora of sensors and cameras also allows it to 'see' in differently lit environments. Unlike other bots, the E2-DR can walk through 26 mm/hour of rain for 20 minutes and can operate in temperatures ranging from -10 to 40 degrees Celsius.
Most people associate Honda with just vehicles, but it has an extensive product line that includes power equipment, engines, and robots. Now, the Japanese company has unveiled what could be the future of disaster recovery efforts: E2-DR. At the International Conference on Intelligent Robots and Systems (IROS) 2017 in Vancouver, Honda showed a working prototype of its disaster response robot. The bipedal machine was first announced in a paper at IROS 2015, and we now know a bit more about its abilities. Standing at around 5’5” and weighing 187 lbs, E2-DR has a torso that can rotate 180 degrees and hands that can grip, allowing it to climb ladders and stairs. The robot walks at 1.2 mph, can step over objects, traverse debris, and endure 20 minutes of rain.
The 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) is celebrating its 30th anniversary in Vancouver this week. Things kick off this morning, with 18 technical tracks all running at the same time. There are over 1,200 papers being presented this year, meaning that we have a packed schedule over the next three days, trying (as we always do) to attend over a hundred talks plus posters and competitions and keynotes and plenaries and forums and the expo even though it’s physically impossible to do everything. We’ll happily die trying, though, and if there are specific things you’re interested in (check out the conference website to see what’s going on), leave a comment or ping us at @BotJunkie and @AutomatonBlog on Twitter and we’ll do our absolute best for you. This week (and over the next few weeks as well), you can expect posts featuring the best technical papers from IROS, as well as a special Video Friday full of weird and amazing new robots. And as always, if you’re at the conference, let us know what cool things you’ve seen so that we can help bring them to the rest of the world.
From self-folding robots, to robotic endoscopes, to better methods for computer vision and object detection, researchers at the University of California San Diego have a wide range of papers and workshop presentations at the International Conference on Intelligent Robots and Systems (or IROS), which takes place from Sept. 24 to 28 in Vancouver, Canada. UC San Diego researchers also are organizing workshops on a range of themes during the event. "I am very pleased to see that we have a strong showing at this flagship conference." Robots and humans are becoming increasingly integrated in various application domains, conference organizers explain on the IROS 2017 website. Soft robotics is one way to create robots that are not dangerous for humans, and the research group of roboticist Michael Tolley is exploring the field with three papers at IROS 2017. Better interactions between robots and people also require improving computer vision, and researchers led by computer scientist Laurel Riek are proposing using depth information to do so in one paper.
Those blockbuster Marvel movies only show our favourite superheroes when they’re out saving the world. If you want a glimpse of what heroes like Iron Man do the rest of the time, look no further than this incredibly articulated humanoid robot called TEO, who’s recently learned to iron clothing. TEO’s been in development at the Universidad Carlos III de Madrid in Getafe, Spain, for a few years already, and has already mastered challenges such as climbing stairs and opening doors. They’re not terribly exciting achievements as far as robotic breakthroughs go, but learning those skills has helped TEO move one step closer to becoming more like The Jetsons’ robotic maid Rosie than Tony Stark’s Iron Man. In a paper accepted for publication at the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems in Vancouver, Canada (pre-print available here), researchers at the Universidad Carlos III de Madrid’s Robotics Lab detail the creation of a new algorithm that allows TEO to identify and remove wrinkles in a garment placed on an ironing board using a depth-sensing camera in its head, regular room lighting, and without knowing what unwrinkled pants, shirts, or dresses are supposed to look like. The algorithm studies the garment laid out before it, and breaks it down into thousands of individual points that are assigned a value between 0 and 1.
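The per-point scoring step can be pictured with a toy depth-map example. The scoring rule below (normalized local depth variation) is a hypothetical stand-in for illustration, not the Robotics Lab's published algorithm; the window size and synthetic garment are made up:

```python
import numpy as np

# Toy illustration of assigning each point on a garment a value in [0, 1],
# as the article describes. Here the score is the local depth variation in
# a small neighborhood, normalized over the whole garment -- a hypothetical
# stand-in for the paper's actual wrinkle descriptor.

rng = np.random.default_rng(0)
depth = np.full((50, 50), 0.80)                      # flat garment, 0.8 m away
depth[20:30, 20:30] += 0.01 * rng.random((10, 10))   # a wrinkled patch

def wrinkle_scores(d, win=3):
    """Score each pixel by the depth standard deviation in a win x win
    neighborhood, rescaled to [0, 1] across the whole map."""
    pad = win // 2
    padded = np.pad(d, pad, mode="edge")
    var = np.empty_like(d)
    h, w = d.shape
    for i in range(h):
        for j in range(w):
            var[i, j] = padded[i:i + win, j:j + win].std()
    spread = var.max() - var.min()
    return (var - var.min()) / spread if spread > 0 else np.zeros_like(var)

scores = wrinkle_scores(depth)
print(scores[25, 25] > scores[5, 5])   # wrinkled patch outscores flat fabric
```

Flat regions have zero local variation and score 0, while the bumpiest point scores 1, giving the robot a map of where to press the iron first.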