From: FORTUNE Eye on A.I. - Tuesday Sep 10, 2019 03:04 pm
Weekly analysis at the intersection of artificial intelligence and industry.

September 10, 2019

A pioneer in artificial intelligence says conventional companies can still distinguish themselves in A.I. despite worries that tech giants like Google and Amazon have already won.


Andrew Ng, a prominent Silicon Valley executive and investor who previously led some of the biggest A.I. projects at Google and its Chinese rival Baidu, says the next wave of A.I. will be in industries in which the tech giants aren’t firmly rooted. Think manufacturing, agriculture, and healthcare.


Ng is a bit biased considering that his latest venture, Landing AI, helps traditional companies adopt A.I. But he makes a compelling argument that established companies still have a chance.


Speaking at TechCrunch’s business technology conference in San Francisco last week, Ng likened the current state of A.I. to the Internet’s rise in the 1990s. Companies like Apple, Microsoft, and FedEx were not Internet natives, he explained, but they were able to become “Internet companies” by creating new businesses that depended on the Web.


For instance, Apple was primarily a computer maker, but it eventually created a huge business out of its Internet-driven app store. These companies did more than merely create websites and apps and then call it a day.


Likewise, traditional companies that haven’t embedded A.I. into their businesses still have time to do so. It just won’t be as simple as buying a cloud software service “where you swipe your credit card and you use it and now your company is A.I.-enabled,” Ng said.


Instead, executives at traditional businesses must think hard about how they can apply deep learning, a key component of artificial intelligence, to their specific needs. Ng offered a few tips: agriculture companies, for instance, could affix sensors to their farming equipment to collect data about their fields and then use A.I. techniques to analyze that data and improve crop yields.


The challenge is that current deep learning techniques, many of which were created by the tech giants, only work well with enormous quantities of data. Non-tech companies, like agricultural businesses, may have to develop their own A.I. techniques that rely on only small amounts of farm data, Ng said.


But if agricultural companies create neural networks (the layered software models at the heart of deep learning) that can learn from small amounts of data, it would be a huge breakthrough. This could level the playing field between the A.I.-powered tech giants and conventional businesses.
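
For readers curious what “learning from small amounts of data” can look like in practice, here is a minimal sketch of one common workaround, transfer learning: start from a network pretrained on a large, generic image corpus and retrain only a small final layer on a handful of examples. This is an illustration, not Ng’s or Landing AI’s method; the crop-health labels and placeholder image tensors below are hypothetical stand-ins for real farm data.

```python
# Minimal transfer-learning sketch: reuse a pretrained network, retrain a tiny head
# on a small, hypothetical "crop health" dataset.
import torch
import torch.nn as nn
from torchvision import models

# Start from a model pretrained on a large, generic image dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False  # freeze the pretrained feature extractor

# New, trainable head: two hypothetical classes (healthy vs. stressed crops).
model.fc = nn.Linear(model.fc.in_features, 2)

# A handful of placeholder tensors stands in for the "small amount of farm data."
images = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, 2, (16,))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a few passes are often enough to fit a small head
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```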


“One of the myths we tell in Silicon Valley is that whenever there is disruptive technology, the startups always win,” Ng says. “That’s not true.”


***


I’d like to direct your attention to Fortune’s latest newsletter, The Loop, which covers the business of sustainability. As Fortune’s Eamon Barrett explains, The Loop will provide “the latest on environmentalism in the boardroom, call out corporations that aren’t pulling their weight, and highlight advances in tech and policy designed to usher in greater sustainability.” Interested readers can subscribe here.


Jonathan Vanian
@JonathanVanian
jonathan.vanian@fortune.com



A.I. IN THE NEWS


The Pentagon and A.I. ethics. The Pentagon is looking to hire an ethics expert who can help the Defense Department navigate some of A.I.’s most pressing ethical concerns, trade publication Defense Systems reported. The news comes amid employee protests at companies like Google over the potential military uses of A.I. and Google’s role in selling the government powerful, data-crunching technology.


The Department of Energy’s A.I. office. The U.S. Department of Energy created the DOE Artificial Intelligence and Technology Office, which is intended to coordinate the department’s A.I. projects as part of the White House’s national A.I. strategy. Energy Secretary Rick Perry said in a statement that the new office would “concentrate our existing efforts while also facilitating partnerships and access to federal data, models and high performance computing resources for America’s AI researchers.”


Singapore’s A.I. dreams. Singapore is trying to cultivate an A.I. technology scene and remain a neutral A.I. player between China and the U.S., Bloomberg News reported. The island city-state’s government is investing $500 million in A.I.-related projects through 2020, and the nation is now home to A.I. research offices of Alibaba and Salesforce.


Academic A.I. brain drain. The New York Times reported on a study showing the impact on universities and the startups they produce when A.I. professors leave their full-time academic positions to work at corporations. The study “focused on the start-up economy, showing that departures led to fewer student start-ups,” the Times reported, noting that “experts are split on whether a decline in the start-up economy will harm the progress of A.I.”


DEEP LEARNING DOUBTS


Computer science experts Gary Marcus and Ernest Davis write in The New York Times about the limitations of deep learning technologies and why other A.I. approaches are important. The two write: “In particular, we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets — often using an approach known as deep learning — and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space and causality.”



EYE ON A.I. TALENT


Online music service Spotify hired Tony Jebara as president of engineering for personalization and to lead its machine-learning strategies. Jebara, also a Columbia University computer science professor, was previously a machine learning director at Netflix.


EYE ON A.I. RESEARCH


Deep learning’s gender problem. The Pew Research Center published a study about the difficulties deep-learning systems have in identifying people’s genders from their photos. The study showed that gender-classification systems generally work better when they are trained on a diverse set of photos representing multiple age groups and ethnicities.


In some cases, however, the researchers found that gender-classification systems worked well even when trained on less diverse datasets, a result the Pew team found surprising and puzzling.


A.I.-aided drug discovery. Researchers from biotechnology firm Insilico Medicine published a paper in the journal Nature Biotechnology about using A.I. techniques to significantly reduce the time it takes to create molecules useful for drug discovery. The researchers’ technology combined reinforcement learning, a type of A.I. that learns through many trials, with so-called generative adversarial networks, which can be used to create realistic but fake photos, among other tasks.


FORTUNE ON A.I.


Alarmed By Deepfake Videos, Facebook Creates Contest to Detect Them – By Jeremy Kahn


Most Americans Distrust Companies Using Facial Recognition Technology – By Jonathan Vanian


Deepfake App Zao Makes You a Movie Star. But It Also Raises Big Privacy Concerns – By Alyssa Newcomb




BRAIN FOOD


Making A.I. safe for the U.S. and the rest of the world. Researchers in the U.S. and China must work together on A.I. to ensure that the technology is safe, writes Matt Sheehan, a fellow at the Paulson Institute's MacroPolo think tank, in Bloomberg. Sheehan is concerned that competition between the two countries in A.I. could lead to U.S. lawmakers severing ties between U.S. and Chinese A.I. researchers, who sometimes collaborate and communicate with each other during A.I. conferences and on research projects. Doing so, he writes, “threatens to create a dangerous knowledge vacuum on AI safety precisely when we need smart, strategic cooperation between scientists to mitigate these risks. In this case, engagement will make the U.S. far safer than isolation.”

