{"id":11917,"date":"2017-12-30T00:18:05","date_gmt":"2017-12-30T06:18:05","guid":{"rendered":"http:\/\/costaricasex.com\/teaching-machines-to-teach-themselves\/"},"modified":"2017-12-30T00:18:05","modified_gmt":"2017-12-30T06:18:05","slug":"teaching-machines-to-teach-themselves","status":"publish","type":"post","link":"http:\/\/costaricasex.com\/teaching-machines-to-teach-themselves\/","title":{"rendered":"Teaching machines to teach themselves"},"content":{"rendered":"
\"File
How can computers learn to teach themselves new skills?
\nbaza178\/Shutterstock.com<\/a><\/span><\/figcaption><\/figure>\n

*Arend Hintze, Michigan State University*

Are you tired of telling machines what to do and what not to do? It’s a large part of regular people’s days – operating dishwashers, smartphones and cars. It’s an even bigger part of life for researchers like me, working on artificial intelligence and machine learning.

Much of this is even more boring than driving or talking to a virtual assistant. The most common way of teaching computers new skills – such as telling apart photos of dogs from ones of cats – involves a lot of human interaction or preparation. For instance, if a computer looks at a picture of a cat and labels it “dog,” we have to tell it that’s wrong.

But when that gets too cumbersome and tiring, it’s time to build computers that can teach themselves, and retain what they learn. My research team and I have taken a first step toward the sort of learning that people imagine the robots of the future will be capable of – learning by observation and experience, rather than needing to be directly told every little step of what to do. We expect future machines to be as smart as we are, so they’ll need to be able to learn like we do.

*The 2012 movie ‘Robot & Frank’ features a robot that can learn on its own.* (Video: https://www.youtube.com/watch?v=q4y8YAMPFhk)

## Setting robots free to learn on their own

In the most basic methods of training computers, the machine can use only the information it has been specifically taught by engineers and programmers. For instance, when researchers want a machine to be able to classify images into different categories, such as telling apart cats and dogs, we first need some reference pictures of cats and dogs to start with. We show these pictures to the machine, and when it guesses right we give positive feedback, and when it guesses wrong we apply negative feedback.

This method, called reinforcement learning, uses external feedback to teach the system to change its internal workings in order to guess better next time. This self-change involves identifying the factors that made the biggest differences in the algorithm’s decision, reinforcing accurate decisions and discouraging wrong ones.
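To make that feedback loop concrete, here is a minimal sketch – a toy perceptron, not anything from the actual study – in which an external label supplies the positive or negative feedback and the “self-change” is a weight adjustment (all names are illustrative):

```python
import random

def train_classifier(examples, epochs=20, lr=0.1):
    """Toy feedback loop: a perceptron nudges its weights
    whenever external feedback says its guess was wrong."""
    # examples: list of (features, label) pairs; label 0 = cat, 1 = dog
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for features, label in examples:
            score = sum(w * x for w, x in zip(weights, features)) + bias
            guess = 1 if score > 0 else 0
            error = label - guess          # the external "feedback" signal
            if error != 0:                 # wrong guess: adjust internal workings
                weights = [w + lr * error * x for w, x in zip(weights, features)]
                bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
```

The key point the sketch shows is that every correction comes from outside: the machine never decides on its own whether it was right.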

Another layer of advancement sets up another computer system to be the supervisor, rather than a human. This lets researchers create several dog-cat classifier machines, each with different attributes – perhaps some look more closely at color, while others look more closely at ear or nose shape – and evaluate how well they work. Each time a machine runs, it looks at a picture, makes a decision about what it sees and checks with the automated supervisor to get feedback.

Alternatively, or in addition, we researchers turn off the classifier machines that don’t do as well, and introduce new changes to the ones that have done well so far. We repeat this many times, introducing small mutations into successive generations of classifier machines, slowly improving their abilities. This is a digital form of Darwinian evolution – and it’s why this type of training is called a “genetic algorithm.” But even that requires a lot of human effort – and telling cats and dogs apart is an extremely simple task for a person.
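A bare-bones genetic algorithm of the kind described – keep the better half of the population, refill it with mutated copies, evaluate again – might be sketched like this, on a toy bit-string task rather than the actual classifier experiment:

```python
import random

def evolve(population, fitness, mutate, generations=50):
    """Toy genetic algorithm: each generation, keep the fitter half
    and replace the weaker half with mutated copies of survivors."""
    for _ in range(generations):
        # The "automated supervisor": score and rank every candidate.
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[: len(scored) // 2]
        # Refill the population with mutated copies of random survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(len(scored) - len(survivors))]
    return max(population, key=fitness)

# Illustrative task: evolve a bit string toward all ones.
def fitness(bits):
    return sum(bits)

def mutate(bits):
    flipped = list(bits)
    i = random.randrange(len(flipped))
    flipped[i] = 1 - flipped[i]   # small mutation: flip one bit
    return flipped
```

Because survivors are carried over unchanged, the best candidate never gets worse – a common design choice (elitism) in genetic algorithms.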

## Learning like people

Our research is working toward a shift from a present in which machines learn simple tasks with human supervision, to a future in which they learn complicated processes on their own. This mirrors the development of human intelligence: As babies we were equipped with pain receptors that warned us about physical damage, and we had an instinct to cry when hungry or otherwise in need.

Human babies learn a lot on their own, and also learn a lot from direct instruction by parents specifically teaching vocabulary and specific behaviors. In the process, they learn not only how to interpret positive and negative feedback, but how to tell the difference – all on their own. We’re not born knowing that the phrase “good job” means something positive, and that the threat of a “timeout” implies negative consequences. But we figure it out – and quite quickly. As adults, we can set our own goals and learn to accomplish them fully autonomously; we are our own teachers.

Our brains add each new experience and insight to our abilities and memories, using a capability called neuroplasticity to make and store new connections between neurons. There are several ways to use neuroplasticity in computational systems, but these computational methods all still rely on feedback from an outside supervisor – something externally tells them what is right and wrong. (The method called “unsupervised learning” is not quite accurately named: It doesn’t involve algorithms that can change themselves, and uses a process quite different from what humans would understand as “learning.”)

## Figuring out a maze puzzle

The recent research my group and I have conducted takes a first step toward AI systems with neuroplasticity that do not require supervision. A key problem in doing this involves how to get a computer to give itself feedback that is somehow meaningful or effective.

We didn’t actually know how to do that – in fact, it’s one of the things we’re learning about while analyzing our results. We use Markov Brains, a type of artificial neural network, as the basis of our research. But instead of designing them directly, we used another machine learning technique, a genetic algorithm, to train these Markov Brains.

The challenge we set was to solve a maze using four buttons, which moved forward, backward, left and right. But the controls’ functions changed for each new maze – so the button that meant “forward” last game might mean “left” or “backward” in the next. For a person solving this challenge, the reward would be not only in navigating through the maze but also in figuring out how the buttons had changed – in learning.
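As a simplified stand-in for this setup (the actual experiment uses Markov Brains; everything here is illustrative), a grid maze that reshuffles its button-to-direction mapping at the start of every episode could look like:

```python
import random

DIRECTIONS = {"forward": (0, 1), "backward": (0, -1),
              "left": (-1, 0), "right": (1, 0)}

class ShuffledMaze:
    """Grid maze whose button-to-direction mapping is re-randomized
    every episode, so a solver must infer the controls by observation
    rather than memorize a fixed button sequence."""

    def __init__(self, walls, start, goal):
        self.walls, self.start, self.goal = set(walls), start, goal
        self.reset()

    def reset(self):
        # New episode: shuffle which button produces which move.
        dirs = list(DIRECTIONS.values())
        random.shuffle(dirs)
        self.button_map = dict(zip(range(4), dirs))
        self.pos = self.start

    def press(self, button):
        dx, dy = self.button_map[button]
        nxt = (self.pos[0] + dx, self.pos[1] + dy)
        if nxt not in self.walls:
            self.pos = nxt          # the observable effect of the press
        return self.pos, self.pos == self.goal
```

A solver only ever sees which position each button press produced – exactly the observation it needs to work out the shuffled mapping.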

## Evolving a good solution-finder

In our setup, the Markov Brains that solved mazes fastest – the ones that learned the controls and moved through the maze most quickly – survived the genetic selection process. At the beginning of the process, each algorithm’s actions were pretty much random. Just as with human players, randomly hitting buttons will only rarely get through the maze – but that strategy will succeed more often than doing nothing at all, or even just pressing the same button over and over.

If our research had involved keeping the buttons and maze structure constant, the Markov Brains would eventually learn what the buttons meant and how to get through the maze most quickly. They would immediately hit the correct sequence of buttons, without paying attention to the environment. That’s not the sort of learning we’re aiming for.

By randomizing both the button configurations and the maze structure, we force the Markov Brains to pay more attention, pressing a button and noticing the change to the situation – what direction that button moved through the maze, and whether that is toward a dead end or a wall or an open pathway. This is more advanced learning, to be sure. But a Markov Brain that evolved to navigate using only one or two button configurations could still do well: It would solve at least some mazes very quickly – even if it didn’t solve others at all. That doesn’t provide the adaptability to the environment that we’re looking for.

The genetic algorithm, which decides which Markov Brains to select for further evolution and which to discontinue, is the key to optimizing response to the environment. We told it to select the Markov Brains that were the best overall solvers of mazes (rather than those that were blindingly fast on some mazes but utterly unable to solve others), choosing generalists over specialists.
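The generalist-over-specialist criterion can be sketched as a fitness function that averages performance across all mazes, so failing even one maze drags a candidate’s score down (the `solver` interface here is hypothetical, invented for illustration):

```python
def generalist_fitness(solver, mazes, max_steps=100):
    """Score a candidate by how well it does across *all* mazes,
    so specialists that ace one maze but fail the rest rank low.
    `solver(maze, max_steps)` returns steps taken, or None on failure."""
    scores = []
    for maze in mazes:
        steps = solver(maze, max_steps)
        # A failed maze contributes zero, pulling the average down.
        scores.append(0.0 if steps is None else (max_steps - steps) / max_steps)
    return sum(scores) / len(scores)
```

Under this scoring, a solver that finishes every maze in a moderate number of steps outranks one that is lightning-fast on a single maze and fails the others.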

Over many generations, this process produces Markov Brains that are particularly observant of the changes that result from pressing a particular button and very good at interpreting what those mean: “Pressing the button that moves left took me into a dead end; I should press the button that moves right to get out of there.”

It is this ability to interpret observations that liberates the genetic algorithm–Markov Brain system from the outside feedback of supervised learning. The Markov Brains have been selected specifically for their ability to create internal feedback that changes their structure in ways that lead to pressing the correct button at the correct time more often. Technically, we evolved Markov Brains to be able to learn by themselves.

This is indeed very similar to how humans learn: We try something, look at what happened and use the results to do better the next time. All of that happens within our brains, without the need for an external guide.

\"TheOur work adds a new method to the field of machine learning<\/a>, and in our view takes a major step toward developing what is called \u201cgeneral artificial intelligence,\u201d systems that can learn new information and new skills on their own. It also opens the door for using computer systems to test how learning actually happens<\/a>.<\/p>\n

*Arend Hintze, Assistant Professor of Integrative Biology & Computer Science and Engineering, Michigan State University*

*This article was originally published on The Conversation. Read the original article there.*

Article first appeared at TSG VICE.
