Environmental interaction and navigation:

 Radar, GPS, and lidar are combined to provide navigation and obstacle avoidance (a vehicle developed for the 2007 DARPA Urban Challenge)

Though a significant percentage of robots in commission today are either human controlled or operate in a static environment, there is increasing interest in robots that can operate autonomously in a dynamic environment. These robots require some combination of navigation hardware and software in order to traverse their environment. In particular, unforeseen events (e.g. people and other obstacles that are not stationary) can cause problems or collisions. Some highly advanced robots, such as ASIMO and the Meinü robot, have particularly good navigation hardware and software. Also, self-controlled cars such as Ernst Dickmanns' driverless car and the entries in the DARPA Grand Challenge are capable of sensing the environment well and subsequently making navigational decisions based on this information. Most of these robots employ a GPS navigation device with waypoints, along with radar, sometimes combined with other sensory data such as lidar, video cameras, and inertial guidance systems for better navigation between waypoints.
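As a minimal, hedged sketch of the waypoint idea, the Python below computes the compass bearing from a current GPS fix to the next waypoint; the coordinates, waypoint list, and function name are hypothetical, and a real system would fuse this with radar, lidar, and inertial data.

    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        x = math.sin(dlon) * math.cos(phi2)
        y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(x, y)) % 360.0

    # Hypothetical GPS fix and waypoints.
    current = (40.4420, -79.9450)
    waypoints = [(40.4433, -79.9436), (40.4446, -79.9422)]

    heading = bearing_deg(*current, *waypoints[0])
    print(f"steer toward {heading:.1f} degrees")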

Human-robot interaction:

 Kismet can produce a range of facial expressions.

The state of the art in sensory intelligence for robots will have to progress through several orders of magnitude if we want robots working in our homes to go beyond vacuum-cleaning the floors. If robots are to work effectively in homes and other non-industrial environments, the way they are instructed to perform their jobs, and especially how they will be told to stop, will be of critical importance. The people who interact with them may have little or no training in robotics, so any interface will need to be extremely intuitive. Science fiction authors typically assume that robots will eventually be capable of communicating with humans through speech, gestures, and facial expressions, rather than a command-line interface. Although speech would be the most natural way for the human to communicate, it is unnatural for the robot. It will probably be a long time before robots interact as naturally as the fictional C-3PO, or Data of Star Trek: The Next Generation.

Speech recognition:

Interpreting the continuous flow of sounds coming from a human, in real time, is a difficult task for a computer, mostly because of the great variability of speech.[110] The same word, spoken by the same person, may sound different depending on local acoustics, volume, the previous word, whether or not the speaker has a cold, and so on. It becomes even harder when the speaker has a different accent. Nevertheless, great strides have been made in the field since Davis, Biddulph, and Balashek designed the first "voice input system" in 1952, which recognized "ten digits spoken by a single user with 100% accuracy". Currently, the best systems can recognize continuous, natural speech at up to 160 words per minute with an accuracy of 95%.
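For a concrete starting point, the open-source SpeechRecognition package for Python wraps several recognizers; the sketch below is a minimal example, assuming the package is installed and a microphone is available.

    import speech_recognition as sr  # pip install SpeechRecognition

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        # Sample ambient noise briefly so the energy threshold adapts to the room.
        recognizer.adjust_for_ambient_noise(source, duration=1)
        print("Speak a command...")
        audio = recognizer.listen(source)

    try:
        # Send the audio to a cloud recognizer; accuracy varies with accent and acoustics.
        text = recognizer.recognize_google(audio)
        print("Heard:", text)
    except sr.UnknownValueError:
        print("Could not understand the audio.")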

Robotic voice:

Other hurdles exist when allowing a robot to use voice to interact with humans. For social reasons, synthetic voice proves suboptimal as a communication medium, making it necessary to develop the emotional component of robotic voice through various techniques.
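One open-source way to experiment with this is the pyttsx3 text-to-speech package; the sketch below varies speech rate and volume as a crude stand-in for the emotional component mentioned above. The "mood" presets are invented for illustration; genuinely affective synthesis also shapes pitch contour and timing.

    import pyttsx3  # pip install pyttsx3

    engine = pyttsx3.init()

    # Crude "emotional" presets: speech rate (words/min) and volume (0.0-1.0).
    presets = {"calm": (140, 0.7), "excited": (200, 1.0)}

    for mood, (rate, volume) in presets.items():
        engine.setProperty("rate", rate)
        engine.setProperty("volume", volume)
        engine.say(f"This is my {mood} voice.")
    engine.runAndWait()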

Gestures:

One can imagine, in the future, explaining to a robot chef how to make a pastry, or asking directions from a robot police officer. In both of these cases, making hand gestures would aid the verbal descriptions. In the first case, the robot would be recognizing gestures made by the human, and perhaps repeating them for confirmation. In the second case, the robot police officer would gesture to indicate “down the road, then turn right”. It is likely that gestures will make up a part of the interaction between humans and robots. A great many systems have been developed to recognize human hand gestures.
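As one illustration of how such systems are built today, the sketch below uses the MediaPipe hand-tracking library to flag a simple "pointing up" gesture from a webcam; the threshold and the gesture rule are invented for the example, and production systems classify far richer gesture vocabularies.

    import cv2              # pip install opencv-python
    import mediapipe as mp  # pip install mediapipe

    hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)
    cap = cv2.VideoCapture(0)  # default webcam

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            # Landmark 8 is the index fingertip, 0 the wrist; image y grows downward,
            # so a fingertip well above the wrist suggests a "pointing up" gesture.
            if lm[8].y < lm[0].y - 0.2:
                print("gesture: pointing up")
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
    cap.release()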

Facial expression:

Facial expressions can provide rapid feedback on the progress of a dialog between two humans, and soon may be able to do the same for humans and robots. Robotic faces have been constructed by Hanson Robotics using an elastic polymer called Frubber, which allows a large number of facial expressions thanks to the elasticity of the rubber facial coating and embedded subsurface motors (servos). The coating and servos are built on a metal skull. A robot should know how to approach a human, judging by their facial expression and body language: whether the person is happy, frightened, or crazy-looking affects the type of interaction expected of the robot. Likewise, robots like Kismet and the more recent Nexi can produce a range of facial expressions, allowing them to have meaningful social exchanges with humans.

Artificial emotions:

Artificial emotions can also be generated, composed of a sequence of facial expressions and/or gestures. As can be seen from the movie Final Fantasy: The Spirits Within, the programming of these artificial emotions is complex and requires a large amount of human observation. To simplify this programming in the movie, presets were created together with a special software program. This decreased the amount of time needed to make the film. These presets could possibly be transferred for use in real-life robots.

Personality:

Many of the robots of science fiction have a personality, something which may or may not be desirable in the commercial robots of the future. Nevertheless, researchers are trying to create robots which appear to have a personality; i.e., they use sounds, facial expressions, and body language to try to convey an internal state, which may be joy, sadness, or fear. One commercial example is Pleo, a toy robot dinosaur, which can exhibit several apparent emotions.

Social Intelligence:

The Socially Intelligent Machines Lab of the Georgia Institute of Technology researches new concepts of guided teaching interaction with robots. The aim of the projects is a social robot that learns tasks and goals from human demonstrations without prior knowledge of high-level concepts. These new concepts are grounded from low-level continuous sensor data through unsupervised learning, and task goals are subsequently learned using a Bayesian approach. These concepts can be used to transfer knowledge to future tasks, resulting in faster learning of those tasks. The results are demonstrated by the robot Curi, which can scoop some pasta from a pot onto a plate and serve the sauce on top.
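The pipeline described here, grounding discrete concepts from continuous sensor data without supervision and then learning task goals with a Bayesian update, can be illustrated by a toy sketch. This is not the lab's actual system; the data, cluster count, and likelihoods below are all invented.

    import numpy as np
    from sklearn.cluster import KMeans  # pip install scikit-learn

    rng = np.random.default_rng(0)
    # Hypothetical continuous sensor readings logged during human demonstrations
    # (e.g. gripper force/position features); three latent "concepts" by construction.
    data = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ([0, 0], [3, 0], [0, 3])])

    # Ground the low-level data into discrete concepts without supervision.
    concepts = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)

    # Bayesian update of which concept is the task goal, given that demonstrations
    # end at the goal concept with (assumed) probability 0.8.
    prior = np.ones(3) / 3
    for end_concept in concepts.predict(data[-5:]):  # last few demo endpoints
        likelihood = np.where(np.arange(3) == end_concept, 0.8, 0.1)
        prior = prior * likelihood
        prior /= prior.sum()
    print("posterior over goal concepts:", prior.round(3))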

Control units:

 Puppet Magnus, a robot-manipulated marionette with complex control systems

 RuBot II can manually solve Rubik's Cubes

Control system:

The mechanical structure of a robot must be controlled to perform tasks. The control of a robot involves three distinct phases – perception, processing, and action (robotic paradigms). Sensors give information about the environment or the robot itself (e.g. the position of its joints or its end effector). This information is then processed to be stored or transmitted and to calculate the appropriate signals to the actuators (motors), which move the mechanical structure.

The processing phase can range in complexity. At a reactive level, it may translate raw sensor information directly into actuator commands. Sensor fusion may first be used to estimate parameters of interest (e.g. the position of the robot’s gripper) from noisy sensor data. An immediate task (such as moving the gripper in a certain direction) is inferred from these estimates. Techniques from control theory convert the task into commands that drive the actuators.
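As an illustrative toy of this reactive pipeline, the sketch below fuses noisy position readings by simple averaging, then applies a proportional controller, one of the simplest tools from control theory, to drive a simulated gripper toward a target. The gains, noise levels, and unit dynamics are invented for the example.

    import random

    def fuse(readings):
        """Naive sensor fusion: average several noisy position readings."""
        return sum(readings) / len(readings)

    def p_controller(setpoint, estimate, kp=0.8):
        """Proportional control: the command is proportional to the tracking error."""
        return kp * (setpoint - estimate)

    position = 0.0  # true gripper position along one axis (simulated)
    target = 1.0    # immediate task: move the gripper to this position
    for step in range(20):
        # Perception: three noisy sensor readings of the current position.
        readings = [position + random.gauss(0, 0.02) for _ in range(3)]
        estimate = fuse(readings)
        # Processing: the control law turns the estimate into an actuator command.
        command = p_controller(target, estimate)
        # Action: the actuator moves the mechanism (simplified unit dynamics).
        position += 0.5 * command
    print(f"final position ~= {position:.3f}")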

At longer time scales or with more sophisticated tasks, the robot may need to build and reason with a “cognitive” model. Cognitive models try to represent the robot, the world, and how they interact. Pattern recognition and computer vision can be used to track objects. Mapping techniques can be used to build maps of the world. Finally, motion planning and other artificial intelligence techniques may be used to figure out how to act. For example, a planner may figure out how to achieve a task without hitting obstacles, falling over, etc.
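A minimal sketch of the planning step: breadth-first search over a small occupancy-grid map finds a collision-free path. Real planners (A*, sampling-based methods, and so on) are more sophisticated, and the map here is hypothetical.

    from collections import deque

    def plan(grid, start, goal):
        """Breadth-first search on an occupancy grid: 0 = free, 1 = obstacle.
        Returns a list of cells from start to goal, or None if unreachable."""
        rows, cols = len(grid), len(grid[0])
        parent = {start: None}
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parent[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == 0 and (nr, nc) not in parent:
                    parent[(nr, nc)] = cell
                    queue.append((nr, nc))
        return None

    grid = [[0, 0, 0],
            [1, 1, 0],  # a wall the path must go around
            [0, 0, 0]]
    print(plan(grid, (0, 0), (2, 0)))  # routes around the wall via (1, 2)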

Autonomy levels:

TOPIO, a humanoid robot, played ping pong at Tokyo IREX 2009.

Control systems may also have varying levels of autonomy.

  1. Direct interaction is used for haptic or teleoperated devices, and the human has nearly complete control over the robot’s motion.
  2. Operator-assist modes have the operator commanding medium-to-high-level tasks, with the robot automatically figuring out how to achieve them.
  3. An autonomous robot may go without human interaction for extended periods of time. Higher levels of autonomy do not necessarily require more complex cognitive capabilities. For example, robots in assembly plants are completely autonomous but operate in a fixed pattern.

Another classification takes into account the interaction between human control and the machine motions; a small code sketch of these levels follows the list.

  1. Teleoperation. A human controls each movement; each change to a machine actuator is specified by the operator.
  2. Supervisory. A human specifies general moves or position changes and the machine decides specific movements of its actuators.
  3. Task-level autonomy. The operator specifies only the task and the robot manages itself to complete it.
  4. Full autonomy. The machine will create and complete all its tasks without human interaction.
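As noted above, this classification can be captured as a simple enumeration such as a (hypothetical) robot control API might expose; the names and helper function are illustrative only.

    from enum import Enum

    class Autonomy(Enum):
        """Levels of autonomy, mirroring the classification above (hypothetical API)."""
        TELEOPERATION = 1  # human specifies every actuator change
        SUPERVISORY = 2    # human gives general moves; machine fills in actuator detail
        TASK_LEVEL = 3     # human gives only the task; robot manages its execution
        FULL = 4           # robot creates and completes its tasks without human input

    def requires_operator(level: Autonomy) -> bool:
        # Only full autonomy dispenses with a human in the loop entirely.
        return level is not Autonomy.FULL

    print(requires_operator(Autonomy.SUPERVISORY))  # True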

Research:

Much of the research in robotics focuses not on specific industrial tasks, but on investigations into new types of robots, alternative ways to think about or design robots, and new ways to manufacture them. Other investigations, such as MIT’s cyberflora project, are almost wholly academic.

One notable innovation in robot design is the open sourcing of robot projects. To describe the level of advancement of a robot, the term "Generation Robots" can be used. The term was coined by Professor Hans Moravec, Principal Research Scientist at the Carnegie Mellon University Robotics Institute, to describe the near-future evolution of robot technology. First-generation robots, Moravec predicted in 1997, should have an intellectual capacity comparable perhaps to a lizard and should become available by 2010. Because the first-generation robot would be incapable of learning, however, Moravec predicted that the second-generation robot would be an improvement over the first and become available by 2020, with intelligence maybe comparable to that of a mouse. The third-generation robot should have intelligence comparable to that of a monkey. Though fourth-generation robots, robots with human intelligence, would become possible, Professor Moravec does not predict this happening before around 2040 or 2050.

The second is evolutionary robotics. This is a methodology that uses evolutionary computation to help design robots, especially the body form, or the motion and behavior controllers. In a way similar to natural evolution, a large population of robots is allowed to compete in some way, or their ability to perform a task is measured using a fitness function. Those that perform worst are removed from the population and replaced by a new set, which have new behaviors based on those of the winners. Over time the population improves, and eventually a satisfactory robot may appear. This happens without any direct programming of the robots by the researchers. Researchers use this method both to create better robots and to explore the nature of evolution. Because the process often requires many generations of robots to be simulated, this technique may be run entirely or mostly in simulation, then tested on real robots once the evolved algorithms are good enough. Currently, there are about 10 million industrial robots at work around the world, and Japan is the country with the highest density of robots in its manufacturing industry.
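A minimal sketch of this select-and-mutate loop, with the "robot" reduced to a three-parameter controller genome and a made-up fitness function standing in for simulated task performance:

    import random

    def fitness(genome):
        """Hypothetical fitness: stands in for measured task performance.
        Here, proximity of the parameter vector to an (unknown) optimum."""
        target = [0.5, -0.2, 0.9]
        return -sum((g - t) ** 2 for g, t in zip(genome, target))

    population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]  # remove the worst half
        children = [[g + random.gauss(0, 0.1) for g in random.choice(survivors)]
                    for _ in range(10)]  # mutated copies of winners
        population = survivors + children

    best = max(population, key=fitness)
    print("best fitness:", round(fitness(best), 4))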

Dynamics and kinematics:

In each area mentioned above, researchers strive to develop new concepts and strategies, improve existing ones, and improve the interaction between these areas. To do this, criteria for "optimal" performance and ways to optimize the design, structure, and control of robots must be developed and implemented.

The study of motion can be divided into kinematics and dynamics. Direct kinematics refers to the calculation of end effector position, orientation, velocity, and acceleration when the corresponding joint values are known. Inverse kinematics refers to the opposite case, in which required joint values are calculated for given end effector values, as done in path planning. Some special aspects of kinematics include the handling of redundancy (different possibilities of performing the same movement), collision avoidance, and singularity avoidance. Once all relevant positions, velocities, and accelerations have been calculated using kinematics, methods from the field of dynamics are used to study the effect of forces upon these movements. Direct dynamics refers to the calculation of accelerations in the robot once the applied forces are known; it is used in computer simulations of the robot. Inverse dynamics refers to the calculation of the actuator forces necessary to create a prescribed end-effector acceleration. This information can be used to improve the control algorithms of a robot.
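To make direct and inverse kinematics concrete, here is a sketch for a two-link planar arm; the link lengths are hypothetical, and real manipulators have more joints, work in three dimensions, and must handle the redundancy and singularities mentioned above.

    import math

    L1, L2 = 1.0, 0.8  # hypothetical link lengths of a two-link planar arm

    def forward(theta1, theta2):
        """Direct kinematics: joint angles -> end-effector (x, y)."""
        x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
        y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
        return x, y

    def inverse(x, y):
        """Inverse kinematics: end-effector (x, y) -> one (theta1, theta2) solution.
        The elbow-down mirror configuration gives a second solution, an example of
        multiple joint configurations producing the same pose."""
        c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
        theta2 = math.acos(max(-1.0, min(1.0, c2)))  # elbow-up branch
        theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                               L1 + L2 * math.cos(theta2))
        return theta1, theta2

    x, y = forward(0.4, 0.6)
    print(inverse(x, y))  # recovers approximately (0.4, 0.6)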

Bionics and biomimetics:

Bionics and biomimetics apply the physiology and methods of locomotion of animals to the design of robots. For example, the design of BionicKangaroo was based on the way kangaroos jump.

Education and training:

The SCORBOT-ER 4u educational robot

Robotics engineers design robots, maintain them, develop new applications for them, and conduct research to expand the potential of robotics. Robots have become a popular educational tool in some middle and high schools, particularly in parts of the USA,[135] as well as in numerous youth summer camps, raising interest in programming, artificial intelligence, and robotics among students. First-year computer science courses at some universities now include programming of a robot in addition to traditional software engineering-based coursework.

Career training:

Universities offer bachelors, masters, and doctoral degrees in the field of robotics. Vocational schools offer robotics training aimed at careers in robotics.

Certification:

The Robotics Certification Standards Alliance (RCSA) is an international robotics certification authority that confers various industry- and educational-related robotics certifications.

Summer robotics camp:

Several national summer camp programs include robotics as part of their core curriculum. In addition, youth summer robotics programs are frequently offered by celebrated museums and institutions.

Robotics competitions:

There are many robotics competitions around the globe. One of the most important is the FIRST LEGO League (FLL). The idea of this competition is that children, from as young as nine years old, develop knowledge of and interest in robotics while playing with LEGO. The competition is associated with National Instruments (NI).

Robotics afterschool programs:

Many schools across the country are beginning to add robotics programs to their after school curriculum. Some major programs for afterschool robotics include FIRST Robotics Competition, Botball and B.E.S.T. Robotics. Robotics competitions often include aspects of business and marketing as well as engineering and design.

The Lego company began a program for children to learn and get excited about robotics at a young age.

Employment:

 A robot technician builds small all-terrain robots (courtesy: MobileRobots Inc).

Robotics is an essential component in many modern manufacturing environments. As factories increase their use of robots, the number of robotics-related jobs grows and has been observed to be steadily rising. The employment of robots in industry has increased productivity and efficiency savings and is typically seen as a long-term investment for benefactors. A paper by Michael Osborne and Carl Benedikt Frey found that 47 per cent of US jobs are at risk of automation "over some unspecified number of years". These claims have been criticized on the grounds that social policy, not AI, causes unemployment.

Occupational safety and health implications:

A discussion paper drawn up by EU-OSHA highlights how the spread of robotics presents both opportunities and challenges for occupational safety and health (OSH).

The greatest OSH benefits stemming from the wider use of robotics should be substitution for people working in unhealthy or dangerous environments. In space, defence, security, or the nuclear industry, but also in logistics, maintenance, and inspection, autonomous robots are particularly useful in replacing human workers performing dirty, dull or unsafe tasks, thus avoiding workers’ exposures to hazardous agents and conditions and reducing physical, ergonomic and psychosocial risks. For example, robots are already used to perform repetitive and monotonous tasks, to handle radioactive material or to work in explosive atmospheres. In the future, many other highly repetitive, risky or unpleasant tasks will be performed by robots in a variety of sectors like agriculture, construction, transport, healthcare, firefighting or cleaning services.

Despite these advances, there are certain skills to which humans will be better suited than machines for some time to come and the question is how to achieve the best combination of human and robot skills. The advantages of robotics include heavy-duty jobs with precision and repeatability, whereas the advantages of humans include creativity, decision-making, flexibility and adaptability. This need to combine optimal skills has resulted in collaborative robots and humans sharing a common workspace more closely and led to the development of new approaches and standards to guarantee the safety of the “man-robot merger”. Some European countries are including robotics in their national programmes and trying to promote a safe and flexible co-operation between robots and operators to achieve better productivity. For example, the German Federal Institute for Occupational Safety and Health (BAuA) organises annual workshops on the topic “human-robot collaboration”.

In the future, co-operation between robots and humans will be diversified, with robots increasing their autonomy and human-robot collaboration reaching completely new forms. Current approaches and technical standards aiming to protect employees from the risks of working with collaborative robots will have to be revised.
