
AI (Artificial Intelligence)

Machines simulating human characteristics and intelligence.

The Allen Institute for AI

China to overtake US in AI research

China has committed to becoming the world leader in AI by 2030, with goals to build a domestic artificial intelligence industry worth nearly $150 billion (according to this CNN article). Prompted by these efforts, the Semantic Scholar team at the Allen Institute for AI analyzed over two million academic AI papers published through the end of 2018. This analysis revealed the following:

Our analysis shows that China has already surpassed the US in published AI papers. If current trends continue, China is poised to overtake the US in the most-cited 50% of papers this year, in the most-cited 10% of papers next year, and in the 1% of most-cited papers by 2025. Citation counts are a lagging indicator of impact, so our results may understate the rising impact of AI research originating in China.

They also emphasize that US actions are making it difficult to recruit and retain foreign students and scholars, and these difficulties are likely to exacerbate the trend towards Chinese supremacy in AI research.

OpenAI

OpenAI creates a "capped-profit" to help build artificial general intelligence

OpenAI, one of the largest and most influential AI research entities, was originally a non-profit. However, they just announced that they are creating a “capped-profit” entity, OpenAI LP. This capped-profit entity will supposedly help them accomplish their mission of building artificial general intelligence (AGI):

We want to increase our ability to raise capital while still serving our mission, and no pre-existing legal structure we know of strikes the right balance. Our solution is to create OpenAI LP as a hybrid of a for-profit and nonprofit—which we are calling a “capped-profit” company.

The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission, which allows us to raise investment capital and attract employees with startup-like equity. But any returns beyond that amount—and if we are successful, we expect to generate orders of magnitude more value than we’d owe to people who invest in or work at OpenAI LP—are owned by the original OpenAI Nonprofit entity.

To some, this makes total sense. Others have criticized the move, arguing that it misrepresents money as the only barrier to AGI, or that it implies OpenAI will develop AGI in a vacuum. What do you think?

Learn more about OpenAI’s mission from one of its founders in this episode of Practical AI.

Casey Newton The Verge

The secret lives of Facebook moderators in America

Eventually, artificial intelligence will take over human-powered content moderation jobs at Facebook. Until then, a small population of humans employed by Cognizant (on behalf of Facebook) in Phoenix, Arizona accepts the job of subjecting themselves to the worst of humankind to provide “a better Facebook experience.”

Casey Newton writes for The Verge:

The video depicts a man being murdered. Someone is stabbing him, dozens of times, while he screams and begs for his life. Chloe’s job is to tell the room whether this post should be removed. She knows that section 13 of the Facebook community standards prohibits videos that depict the murder of one or more people. When Chloe explains this to the class, she hears her voice shaking.

Returning to her seat, Chloe feels an overpowering urge to sob. Another trainee has gone up to review the next post, but Chloe cannot concentrate. She leaves the room, and begins to cry so hard that she has trouble breathing.

No one tries to comfort her. This is the job she was hired to do…

AI (Artificial Intelligence)

A response to OpenAI's new dangerous text generator

Those of you following AI-related things on Twitter have probably been overwhelmed with commentary about OpenAI’s new GPT-2 language model, which is “Too Dangerous to Make Public” (according to Wired’s interpretation of OpenAI’s statements). Is this discussion frustrating or confusing for you?

Well, Ryan Lowe from McGill University has published a nice response article. He discusses the model and results in general, but also gives some perspective on the ethical implications and on where the AI community should go from here. According to Lowe:

The machine learning community really, really needs to start talking openly about our standards for ethical research release

NVIDIA Developer Blog

NVIDIA's PhysX project goes open source and beyond gaming

PhysX is NVIDIA’s hardware-accelerated physics simulation engine. It has now been released as open source to move it beyond its most common use case in gaming and to give the embedded and scientific fields access: think AI, robotics, computer vision, and self-driving cars.

PhysX SDK has gone open source, starting today with version 3.4! It is available under the simple 3-Clause BSD license. With access to the source code, developers can debug, customize and extend the PhysX SDK as they see fit.

Abhishek Singh Medium

Getting Alexa to respond to sign language using your webcam and Tensorflow.js

Abhishek Singh isn’t deaf or mute, but that didn’t stop him from asking the question:

If voice is the future of computing interfaces, what about those who cannot hear or speak?

This thought led to a super cool project wherein a computer interprets sign language and speaks the results to a nearby Alexa device. Live demo here and code here.
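The project’s core idea is a pipeline: recognize a gesture from webcam features, map it to a phrase, and speak that phrase aloud so Alexa hears it. As a rough illustration only (not the project’s actual TensorFlow.js implementation), here is a toy nearest-neighbor classifier over gesture feature vectors; the gesture names, phrases, and three-number “features” are all invented for the sketch.

```python
import math

# Toy stand-in for the project's idea: classify a gesture feature vector
# (in the real project, features come from webcam frames via TensorFlow.js)
# into a phrase that a speech synthesizer would then say aloud to Alexa.
# The gestures and feature values below are invented for illustration.
TRAINING = {
    "hello":   [0.9, 0.1, 0.2],
    "weather": [0.2, 0.8, 0.5],
    "stop":    [0.1, 0.2, 0.9],
}
PHRASES = {
    "hello":   "Alexa, hello",
    "weather": "Alexa, what is the weather?",
    "stop":    "Alexa, stop",
}

def classify(features):
    """Nearest-neighbor match against the known gesture prototypes."""
    return min(TRAINING, key=lambda g: math.dist(features, TRAINING[g]))

def respond(features):
    """Map a recognized gesture to the text to be spoken."""
    return PHRASES[classify(features)]

print(respond([0.85, 0.15, 0.25]))  # closest to the "hello" prototype
```

The real project replaces the hand-made vectors with a learned model running in the browser, but the gesture-to-phrase mapping step works on the same principle.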


YouTube

Solving Flappy Bird with Deep Reinforcement Learning [31:48]

I’m relatively familiar with Machine Learning at this point, but I had never heard of Reinforcement Learning until I watched this excellent talk by Kaleo Ha’o at ML4ALL.

I knew it was going to be good as soon as he laid out this comparison: if Machine Learning is teaching computers by example, then Reinforcement Learning is teaching computers by experience. Fascinating stuff!
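To make the “teaching by experience” idea concrete, here is a minimal tabular Q-learning sketch on a toy corridor environment. This is not the deep reinforcement learning setup from the talk (which uses a neural network on Flappy Bird); the environment, hyperparameters, and episode count here are invented for illustration.

```python
import random

# Toy corridor: states 0..4; reaching state 4 yields reward 1 and ends
# the episode. Actions: 0 = left, 1 = right. Pure-Python tabular Q-learning.
N_STATES, ACTIONS = 5, (0, 1)
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Move left or right; reward 1 only when the goal (state 4) is reached."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

random.seed(0)
for _ in range(500):  # episodes of experience, not labeled examples
    s = 0
    while True:
        # epsilon-greedy: mostly exploit what we know, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # Q-learning update: nudge toward reward + discounted best next value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

# Greedy policy after training: right (1) from each non-terminal state.
policy = [max(ACTIONS, key=lambda x: Q[s][x]) for s in range(N_STATES)]
print(policy)
```

The agent is never shown a correct answer; it only acts, observes rewards, and updates its value estimates, which is exactly the example-vs-experience distinction from the talk.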

OpenAI

OpenAI Fellows — Fall 2018 (now open)

As we gear up for the launch of Practical AI and more AI/ML/DS-related news coverage, I wanted to bring this 6-month, compensated apprenticeship in AI research at OpenAI to your attention.

We’re now accepting applications for the next cohort of OpenAI Fellows, a program which offers a compensated 6-month apprenticeship in AI research at OpenAI. We designed this program for people who want to be an AI researcher, but do not have a formal background in the field. Applications for Fellows starting in September are open now and will close on July 8th at 12AM PST.

Apply here.

James Vincent The Verge

Google’s AI sounds like a human on the phone — should we be worried?

Ok, so I’m equally excited and concerned by this AI demo.

James Vincent writes for The Verge:

The most impressive demo at Google I/O was a phone call to book a haircut. This call wasn’t made by a human, but by the Google Assistant, which did an uncannily good job of asking the right questions, pausing in the right places, and even throwing in the odd “mmhmm” for realism.

You have to hear this AI phone call for yourself! There’s a video of the demo embedded in this post.

OpenAI

Preparing for malicious uses of AI

Elon Musk – of SpaceX and Tesla, and a co-founder of OpenAI – says this in a related video on YouTube:

I am concerned about certain directions AI could take that would be not good for the future. I think it would be fair to say that not all AI futures are benign. If we create some artificial super intelligence that supersedes us in every way by a lot, it’s very important that that be benign.

Elon goes on to talk more specifically about his fears of AI and his hope that, if we have this incredible power, it not be concentrated in the hands of a few. He doesn’t exactly say Google, but everyone knows that’s who he means.

From OpenAI:

We’ve co-authored a paper that forecasts how malicious actors could misuse AI technology, and potential ways we can prevent and mitigate these threats. This paper is the outcome of almost a year of sustained work with our colleagues at the Future of Humanity Institute, the Centre for the Study of Existential Risk, the Center for a New American Security, the Electronic Frontier Foundation, and others.

Medium

Announcing AI Fund

Andrew Ng shared his plans for his newly created AI Fund — with investors including NEA, Sequoia, Greylock Partners, SoftBank Group, and others.

Andrew Ng:

I am excited to announce the formation of the AI Fund. We have raised $175 million, and will be sequentially initiating new businesses that use AI to improve human life.

In the early days of electricity, much of the innovation centered around slightly different improvements in lighting. While this was an important foundation, the really transformative applications, in which electric power spurred massive redesigns in multiple industries, took longer to be grasped. AI is the new electricity, and is at a similar inflection point.

Given Andrew Ng’s prominence and success in bringing AI to industry, and his partnership with some of the world’s premier technology investment firms, this announcement may well signal the next wave of capitalization for AI-oriented startups.
