- GPU Technology Conference 2016
- 05 Apr 2016 06:21:25 pm
- Last edited by Alex on 05 Apr 2016 08:30:16 pm; edited 2 times in total
GTC was notably smaller and less populated than CES, but I expected as much, even though GTC attendance has doubled since 2012. By 2pm the exhibition floor was not nearly as crowded as it was at 11am, right after the keynote. Thankfully, unlike at CES16, I was able to try an Oculus Rift within 45 minutes: I booked my appointment, then walked around the floor and had lunch. When it was close to my appointment time, I got in line and they set me up. They had 8 booths, 4 Oculus and 4 Vive, and since some of the Oculus experiences were seated they had a few more Oculus slots per day than Vive slots. I played EVE: Valkyrie, and it definitely took some getting used to.
Unlike with PSVR at CES, I did get a slight sense of motion sickness. Now, I'm not one to get motion sick, but I attribute this case to the fact that I was turning my head and piloting the spaceship all while planted in a chair. I should have been perfectly fine; I was sitting down in a cockpit, after all. However, I'm pretty sure the only times I felt a bit dizzy were when I was both looking around and maneuvering. It also took me a while to remember that I could look around: at first I only made small head movements, and about halfway through my session I realized I could look much farther left, right, up, and down. I managed one kill across my two games.
There was a cool service that I saw. To set some background: I do occasionally browse imgur, and I had stumbled across an album of neural network images on the front page. I quickly registered on the site and queued some images up to be processed. About 3 hours later the site redirected to its Facebook page, where a post explained that the site was seeing higher than expected volume from outside countries; all images in the queue were purged and access was now only permitted from within Russia. I cried. I ended up finding the source on GitHub, since it's open source. It turns out there's another website that does this: http://deepart.io. The wait time to have an image processed there is 6 hours, so I'll share my results tomorrow, but the booth was doing them in 15 minutes. I could pay a couple euros for a 15-minute turnaround, but I don't mind waiting.
This brings me to the keynote. I wouldn't have a lot to write about if I hadn't gotten into the keynote. I had a seat right up front, behind the rows of press. It started a few minutes after 9am, but while we waited there was a great screensaver of sorts on the stage screen; that was fun to watch for about half an hour. Once underway, Jen-Hsun Huang, the CEO and co-founder of NVIDIA, laid out the 5 areas he'd be speaking on: the NVIDIA SDK, the Tesla P100, Iray VR, autonomous cars plus HD mapping, and the DGX-1. I didn't take notes, and I only decided to take photos on my phone pretty late into the keynote, so I'll just touch on the stuff I remember.
Autonomous Cars and HD Mapping
They actually announced some of this stuff at CES16, but I missed it. Well, I saw their exhibit. I saw the car. I just didn't see anything to read, didn't really know what was going on, and all the representatives were busy, so I carried on. In any case, they announced the PX-2 at CES16, and today they showed what it's capable of. It supports up to 12 cameras, takes in all that data, builds a point cloud of the scenery, and uploads that point cloud to servers where it's processed. So rather than continuously uploading 12 HD video streams, you upload one highly compressed point cloud. This isn't meant to directly control the car, but to collect map data of the roads, which will give the cars concise and accurate data about lane markers, signs, exits, and much more.
As far as autonomous driving goes, NVIDIA is fully committed to deep learning and neural networks, announcing the DGX-1 and the P100 as well as the supporting SDKs. In fact, if you want to read up on how neural networks can help a car learn to drive, Bloomberg ran a great article on George Hotz, aka geohot, and his 2016 Acura. It's worth a read, but in summary: George bought a 2016 Acura, which has no autonomous driving capabilities, and taught it to drive itself. His goal is to make the software and cameras available for $1,000, though there's no word on the additional supporting hardware. Seriously, just read it.
Cemetech Plug: We have a discussion topic right now about autonomous driving! *cough cough*
DGX-1
And that's all I feel comfortable touching on directly, although indirectly I want to touch on the DGX-1. There was a lot of talk about how it will make deep learning much easier. What is deep learning? Deep learning is a way for computers to derive results from data. You could, in essence, give a computer thousands of images of a certain painting style, then ask it to paint a scene in that style. You could give the computer hundreds to thousands of images of elephants, and it'll then be able to pick out elephants in pictures. A great example was actually announced yesterday: Facebook has made it easier for blind users to navigate and interact. Facebook now "speaks" image contents to visually impaired users (or, I'm sure, anyone who enables that option). I think this article by The Verge describes it best, but basically these computers have learned what objects are and can tell them apart. In The Verge's example, Facebook found pizza (sadly, it doesn't say what kind) in one image, and two people smiling in front of the ocean with the sky in another. That Deepart service I mentioned above? It uses neural networks and deep learning to redraw a submitted photo in the style of another photo.
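To give a feel for "deriving results from data," here's a toy sketch of the smallest building block of those deep networks: a single artificial neuron trained by gradient descent. Nothing here is NVIDIA's stack; the data and numbers are my own invention. It learns, purely from labeled examples, to classify 2D points by which side of a line they fall on.

```python
import numpy as np

# A single artificial neuron (logistic regression) trained by gradient
# descent. It is never told the rule "x + y > 0"; it infers a matching
# decision boundary from 200 labeled example points.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                # 200 random 2D points
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # label: above the line?

w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid "activation"
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # gradient of the log loss
    b -= 0.5 * np.mean(p - y)

pred = (X @ w + b) > 0
accuracy = np.mean(pred == y)                # near-perfect on this toy data
```

A real deep network stacks thousands of these neurons in layers and trains on images instead of points, which is exactly the kind of workload the DGX-1 is built to accelerate.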
Deep learning is incredibly useful for images. Google uses it for their image search results: when the service first started, it very likely relied on embedded keywords or the words on the page a photo came from, but now Google's deep learning systems can automatically determine what's in each photo. Deep learning is also useful for robots. Rather than programming a robot step-by-step for a task, you give it the basics - square peg goes through square hole - and let it learn over hundreds of tries. And if you've been keeping up on AlphaGo, that machine used deep learning to learn how to play Go and beat a human expert. The programmers didn't program the machine with how to play Go; instead they taught it the basics, and the machine learned by playing itself over and over and over again, each time learning new strategies. In fact, during one of the matches AlphaGo made a move that completely baffled the human player because it didn't seem logical - yet it turned out to be an incredibly calculated move. Seriously, read up on that too.
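That "learn over hundreds of tries" idea can be sketched with tabular Q-learning, a much simpler relative of the reinforcement learning behind AlphaGo (which combines it with deep networks and self-play). The toy task and all the numbers here are my own invention: an agent on a 5-cell line is never told that moving right is good; it discovers that from reward alone.

```python
import random

# Tabular Q-learning: learn a task by trial and error, not by being
# programmed. The agent starts at cell 0 of a 5-cell line; reaching
# cell 4 pays a reward of 1. Action 0 = left, action 1 = right.
random.seed(0)
N = 5
Q = [[0.0, 0.0] for _ in range(N)]  # Q[state][action]: expected value

for episode in range(300):
    s = 0
    for _ in range(1000):           # step cap so every episode ends
        if s == N - 1:
            break
        # explore 30% of the time, otherwise act greedily
        if random.random() < 0.3:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2 = s + 1 if a == 1 else max(0, s - 1)
        r = 1.0 if s2 == N - 1 else 0.0
        # standard Q-learning update (learning rate 0.5, discount 0.9)
        Q[s][a] += 0.5 * (r + 0.9 * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy heads right from every cell
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N - 1)]
```

Early episodes are aimless wandering; by the end, the learned values make "go right" the obvious choice everywhere - the same discover-by-playing dynamic, at a vastly smaller scale, that let AlphaGo surprise its human opponent.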
Deep Learning and Autonomous Driving
This was another point NVIDIA touched on. They are building 2 autonomous race cars and entering them into Formula E's ROBORACE. There will be 10 teams with 2 cars each. Each car will use the same brain/computer, the aforementioned PX-2, to make a level playing field, but what can differ is how that information is processed: NVIDIA will be using their technology while other companies/teams use theirs. Racing is a great place for this tech to take off; racing leagues have brought many technologies to modern cars.
2016 looks to be a great year!