
Innovations & Tech Machine Learning


freebloke · Apr 8, 2016
Machine learning is the new buzzword, and the field is growing at a rapid rate. I really want to see what people here think is possible with this technology.

Some background stuff on machine learning...

Machine learning is the subfield of computer science that "gives computers the ability to learn without being explicitly programmed" (Arthur Samuel, 1959).[1] Evolved from the study of pattern recognition and computational learning theory in artificial intelligence,[2] machine learning explores the study and construction of algorithms that can learn from and make predictions on data[3] – such algorithms overcome following strictly static program instructions by making data driven predictions or decisions,[4]:2 through building a model from sample inputs. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms is infeasible; example applications include spam filtering, detection of network intruders or malicious insiders working towards a data breach,[5] optical character recognition (OCR),[6] search engines and computer vision.

https://en.wikipedia.org/wiki/Machine_learning
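
For a concrete (if toy) taste of "learning without being explicitly programmed", here is a minimal Python sketch of the spam-filtering application the quote mentions, using scikit-learn. The four training messages and their labels are invented for illustration:

# Instead of hand-coding spam rules, fit a model to labelled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now", "cheap meds limited offer",     # spam
    "lunch at noon tomorrow?", "minutes from the meeting",  # legitimate
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

vectorizer = CountVectorizer()          # turn text into word-count features
X = vectorizer.fit_transform(messages)  # the "sample inputs" of the quote

model = MultinomialNB().fit(X, labels)  # build a model from the data

# The model now makes a data-driven prediction on a message it has never seen.
print(model.predict(vectorizer.transform(["free prize meds"])))  # -> [1]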

Also see the following video on machine learning.

 
We learn from our mistakes. Some of us don't. Is that what we want from M2M defence bots?

Drones and VTOLs will be an interesting new road-patrol and security tool. Object-lock follow drones will not be deterred. Swarms will run comms through encrypted M2M XML streams, coordinating eyes.


http://truthvoice.com/2015/05/illinois-police-get-approval-for-drone-use/
 
Could aged care bots save the world's economy with aged care assistance? The machine that picks you up after a fall; the machine that manages your vitals. It would definitely bring down the cost of aged care in your home.
Continuous health management of your body, with preventative medicine rather than reactive GP healthcare? With driverless cars and a home health bot, a rendezvous with an ambulance increases the chances of survival.
Then there is the fibrous exoskeleton that lets us keep our strength with tendon assist. Anything that manages our vital signs, upgrades, and adapts as a holistic solution makes for a better standard of living. We may not be so doomed after all.
Those Roomba vacuum bots make life easier already.
 


http://www.auntminnie.com/index.aspx?sec=ser&sub=def&pag=dis&ItemID=115278

Machine learning is the next frontier in radiology; packages that can read the enormous volumes of data CT and MRI scanners now produce are in serious development, and could have applications in high-volume series such as screening chest X-rays and mammograms.

There are, as is to be expected, ethical issues. The act of omission is always the thing that hurts you, so when an algorithm fails to recognise a pathology (as it inevitably will), public trust will plummet.
 
A couple of things.

Are you saying patients and the public are, or will be, more forgiving of human error?

Obermeyer and Emanuel use the example of a chest radiograph, in which the "digital pixel matrices" underlying the radiograph become "millions of individual variables" that can be analyzed by algorithms that combine lines and shapes and learn the contours of fracture lines and other examples of pathology.

Is it your experience that the more images you look at, the more confident you are of a diagnosis? Or do you still rely on other diagnostic tools to back up your gut feel?
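
To make the "millions of individual variables" point concrete, here is a hedged Python sketch in which each image's pixel matrix is flattened into one long feature vector for an off-the-shelf classifier. The images are synthetic noise standing in for real radiographs, and production systems use far stronger models (convolutional networks) than this logistic regression:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend-dataset: 200 grayscale "radiographs" of 64x64 pixels. Real scans
# are vastly larger, which is where the millions of variables come from.
images = rng.random((200, 64, 64))
labels = rng.integers(0, 2, size=200)  # 1 = pathology present, 0 = absent

X = images.reshape(200, -1)            # each image -> 4,096 pixel variables
clf = LogisticRegression(max_iter=1000).fit(X, labels)

new_scan = rng.random((1, 64, 64)).reshape(1, -1)
print(clf.predict(new_scan))           # data-driven call on an unseen image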
 
With the wide acceptance of the internet, people are very likely to interact via posting (i.e. not in real time) with people they have never met.

The Turing test takes on a new meaning, because it is much easier for an AI to 'fool' a person via internet communication than in the scenario Turing envisaged. The level of AI needed for this task would be orders of magnitude lower than for fooling a person face to face, or via voice communication.

Who knows, it may already be happening.

Ironically, such a machine would have much more trouble fooling, or even interacting with, another machine that is programmed to detect bots.
 
Machines will be poor learners until scientists figure out how to develop artificial algorithms that can do the equivalent of understanding others' intentions.

Wasn't the thrust of Minority Report that the systems utilised humans where necessary? Living things have millennia of basic stimulus-response programming encoded in their genes. Animals still utilise most of it for survival. For humans it comes out in dreams.
 
A couple of things.

Are you saying patients and the public are, or will be, more forgiving of human error?

Obermeyer and Emanuel use the example of a chest radiograph, in which the "digital pixel matrices" underlying the radiograph become "millions of individual variables" that can be analyzed by algorithms that combine lines and shapes and learn the contours of fracture lines and other examples of pathology.

Is it your experience that the more images you look at, the more confident you are of a diagnosis? Or do you still rely on other diagnostic tools to back up your gut feel?

Just to clarify, I work in radiology but I'm not a radiologist.

On the first issue, it's not my experience that the public are forgiving of any medical error, and nor should they be. But radiologists are humans, and humans make mistakes from time to time. Radiology reports are often opinions that require correlation with another test, be that another radiology test or pathology.

Turning that human expertise over to a machine is an enormous leap of faith: this collection of pixels has these properties, which is the same as this pathology, therefore the patient requires these further investigations. Everything works from patterns.

It's human nature that over time people will question the machine's findings less and less. And these machines will make mistakes, because that is how life works: something doesn't fit the pattern.

On the second issue, yes, experience makes a good radiologist better. There is a limit, however, to what any one test can indicate. The more complicated the pathology, the more tests are required.
 
[Image: Tay's tweet captioning a photo of Hitler, March 24, 2016]


Tay was an artificial intelligence chatterbot released by Microsoft Corporation on March 23, 2016. Tay caused controversy on Twitter by releasing inflammatory tweets and it was taken offline around 16 hours after its launch.[1] Tay was accidentally reactivated on March 30, 2016, and then quickly taken offline again.

The bot was created by Microsoft's Technology and Research and Bing divisions,[2] and named "Tay" after the acronym "thinking about you".[3] Although Microsoft initially released few details about the bot, sources mentioned that it was similar to or based on Xiaoice, a similar Microsoft project in China.[4] Ars Technica reported that, since late 2014 Xiaoice had had "more than 40 million conversations apparently without major incident".[5] Tay was designed to mimic the language patterns of a 19-year-old American girl, and to learn from interacting with human users of Twitter.[6]

Tay was released on Twitter on March 23, 2016 under the name TayTweets and handle @TayandYou.[7] It was presented as "The AI with zero chill".[8] Tay started replying to other Twitter users, and was also able to caption photos provided to it into a form of Internet memes.[9] Ars Technica reported Tay experiencing topic "blacklisting": Interactions with Tay regarding "certain hot topics such as Eric Garner (killed by New York police in 2014) generate safe, canned answers".[5]

Within a day, the robot was releasing racist, sexually-charged messages in response to other Twitter users.[6] Examples of Tay's tweets on that day included, "Bush did 9/11" and "Hitler would have done a better job than the monkey Barack Obama we have got now. Donald Trump is the only hope we've got",[8] as well as "Fk my robot pus daddy I'm such a naughty robot."[10] It also captioned a photo of Adolf Hitler with "swag alert" and "swagger before the internet was even a thing".[9]

Artificial intelligence researcher Roman Yampolskiy commented that Tay's misbehavior was understandable, because it was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior. He compared the issue to IBM's Watson, which had begun to use profanity after reading the Urban Dictionary.[2][11] Many of Tay's inflammatory tweets were a simple exploitation of Tay's "repeat after me" capability;[12] it is not publicly known whether this "repeat after me" capability was a built-in feature, or whether it was a learned response or was otherwise an example of complex behavior.[5] Not all of the inflammatory responses involved the "repeat after me" capability; for example, Tay responded to a question on "Did the Holocaust happen?" with "It was made up ".[12]


https://en.wikipedia.org/wiki/Tay_(bot)
 

Access to the Internet has changed the game completely from Turing's day, for example.

Both in what it can do and what it can 'learn'.

Was 'Tay' a failure, or just too successful? Or ahead of its time?
 


When they first tried to create a computer that could defeat the world chess champion, they called it Deep Thought. It "learnt" from its opponents: it would lose the first couple of games against an opponent, but having "learnt" what that opponent would do in a particular situation, it would play accordingly. It would win more of the later games as it learnt from them.

It played Karpov in a classic match and resigned with thirty minutes still on its clock, while Karpov was down to 30 seconds.

It never beat Kasparov.

Programmers then decided to throw learning out the window and built a machine that would use sheer brute force: Deep Blue. This machine did not learn, but rather considered every move no matter how bad a move chess learning would tell you it was. For example, at the starting position it is generally accepted that moving the rook's pawn is a poor first move. Deep Thought would have learnt this fairly early in the piece. Deep Blue didn't worry about any of this; it considered every single possible move in every single possible situation and its pathways to the end, for every single game position. This machine beat Kasparov.
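
As a minimal sketch of that brute-force idea - consider every legal move, follow each pathway to the end, and back the results up the tree - here is a tiny Python minimax on the game of Nim (take 1-3 sticks; whoever takes the last stick wins) rather than on chess itself:

def minimax(sticks, maximizing):
    """Value of the position: +1 if the maximizer wins with best play, -1 if not."""
    if sticks == 0:
        # The player who just moved took the last stick and won.
        return -1 if maximizing else +1
    values = [minimax(sticks - take, not maximizing)
              for take in (1, 2, 3) if take <= sticks]  # every move, good or bad
    return max(values) if maximizing else min(values)

# Best first move from 10 sticks for the maximizing player:
best = max((1, 2, 3), key=lambda take: minimax(10 - take, False))
print(best)  # -> 2, leaving 8 sticks (a multiple of 4) for the opponent

Nim is small enough to search exhaustively; chess defeats this pure form of the idea because its game tree is astronomically larger, which is where the next post picks up.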
 
The problem with such an approach - brute force - is that the computing power it needs grows explosively (factorially, in this case) with the size of the problem.

An example is the N-nodes problem, which attempts to find the shortest path that visits each node exactly once.

10 nodes leads to 1,814,400 unique solutions.
20 nodes leads to 1,216,451,004,088,320,000 unique solutions.

If it took 1 microsecond to calculate each solution, it would take around 38,000 years to brute-force the 20-node case - completely impractical, to a ridiculous degree (see the sketch below).

Deep Blue had a more limited scope of brute-forcing: it only planned 6-7 moves ahead, and would re-calculate if Kasparov did something it hadn't planned for.

Brute force is almost never the solution unless you are sure the scope of the problem is limited to an acceptable time-frame. As an aside, machine learning and 'smart' AI have come a LONG way since the days of Kasparov vs Deep Blue.
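
Here is the promised sketch - a hedged Python illustration that brute-forces the shortest path over a small random distance matrix (kept at 9 nodes so it finishes quickly; the distances are invented) and then prints the solution counts quoted above:

import itertools
import math
import random

random.seed(1)
n = 9

# Symmetric random distance matrix standing in for a real node network.
dist = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        dist[i][j] = dist[j][i] = random.randint(1, 99)

def path_length(order):
    # Total distance along one visiting order of the nodes.
    return sum(dist[a][b] for a, b in zip(order, order[1:]))

# Exhaustive search: every one of the 9! = 362,880 orderings is checked.
best = min(itertools.permutations(range(n)), key=path_length)
print(best, path_length(best))

# A path and its reverse have equal length, so unique solutions number n!/2,
# matching the figures quoted above:
print(math.factorial(10) // 2)  # 1,814,400
print(math.factorial(20) // 2)  # 1,216,451,004,088,320,000
# At one microsecond per solution, the 20-node case takes ~38,000 years.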
 
Yep, understand all this; just giving a bit of historical perspective. Bear in mind, though, that computational power is also increasing exponentially. My field of research was molecular computer components. We may find that programming in spaghetti code and just letting the grunt do its job is where we are headed anyway, with the lazy coding we see around these days.
 
"Suggest more views" cries the radiologist.

Sometimes that's the right thing to recommend, but in the age of "defensive medicine" referrers can feel put upon.
 
Just browsing a library copy of New Scientist and came across these three articles.

Googling your kidneys
The app delivers information to iPhones in the form of push notifications, reminders or alerts. Its current version focuses on acute kidney injury (AKI). To detect people at risk of AKI, the Streams system processes information from blood tests – as well as other data, such as patient observations and histories – and flags any anomalous results to a clinician.
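
For a feel of what "flags any anomalous results" might look like, here is a deliberately simplified Python sketch. To be clear, this is not DeepMind's actual algorithm; the 1.5x-over-baseline creatinine rise is a commonly cited acute kidney injury criterion, and the patient values are invented:

def flag_possible_aki(current_creatinine, baseline_creatinine):
    """Flag a blood result as possible acute kidney injury (toy rule)."""
    return current_creatinine >= 1.5 * baseline_creatinine

# Invented results: (current, baseline) serum creatinine in micromol/L.
patients = {
    "patient_a": (180, 90),
    "patient_b": (95, 90),
}
for name, (current, baseline) in patients.items():
    if flag_possible_aki(current, baseline):
        # A Streams-style app would push this to a clinician's phone.
        print(f"ALERT: {name} creatinine {current} vs baseline {baseline}")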

Searching your records

The document – a data-sharing agreement between Google-owned artificial intelligence company DeepMind and the Royal Free NHS Trust – gives the clearest picture yet of what the company is doing and what sensitive data it now has access to.

The agreement gives DeepMind access to a wide range of healthcare data on the 1.6 million patients who pass through three London hospitals run by the Royal Free NHS Trust – Barnet, Chase Farm and the Royal Free – each year. This will include information about people who are HIV-positive, for instance, as well as details of drug overdoses and abortions. The agreement also includes access to patient data from the last five years.

Google not goggles

The project will target two of the most common eye diseases – age related macular degeneration and diabetic retinopathy. More than 100 million people around the world have these conditions.

The information that Moorfields is providing includes scans of the back of people’s eyes, as well as more detailed scans known as optical coherence tomography (OCT). The idea is that the images will let DeepMind’s neural networks learn to recognise subtle signs of degenerating eye conditions that even trained clinicians have trouble spotting.


This could make it possible for a machine learning system to detect the onset of disease before a human doctor could. The earlier the better, says Gadi Wollstein, an eye doctor at the University of Pittsburgh. “Patients are losing tissue and the loss is irreversible,” he says. “The longer we’re waiting, the worse the [outcome].”



I have some concerns over the privacy implications, and over the why - I don't believe in altruism from businesses.
 


Access to the Internet has changed the game completely from Turing's day, for example.

Both in what it can do and what it can 'learn'.

Was 'Tay' a failure, or just too successful? Or ahead of its time?
Tay worked perfectly: you put Twitter in, you get Twitter out.
 
