Showing posts with label AI. Show all posts

A hub of Artificial Intelligence resources by MIT


A team led by Media Lab Associate Professor Cynthia Breazeal has launched aieducation.mit.edu to share a variety of online activities for K-12 students to learn about artificial intelligence, with a focus on how to design and use it responsibly. Learning resources provided on this website can help to address the needs of the millions of children, parents, and educators worldwide who are staying at home due to school closures caused by Covid-19, and are looking for free educational activities that support project-based STEM learning in an exciting and innovative area.

A mural of hopes and questions about artificial intelligence from a school workshop.


The website is a collaboration between the Media Lab, MIT Stephen A. Schwarzman College of Computing, and MIT Open Learning, serving as a hub to highlight diverse work by faculty, staff, and students across the MIT community at the intersection of AI, learning, and education.

"MIT is the birthplace of Constructionism under Seymour Papert. MIT has revolutionized how children learn computational thinking with hugely successful platforms such as Scratch and App Inventor. Now, we are bringing this rich tradition and deep expertise to how children learn about AI through project-based learning that dovetails technical concepts with ethical design and responsible use," says Breazeal.

The website will serve as a hub for MIT's latest work in innovating learning and education in the era of AI. In addition to highlighting research, it also features up-to-date project-based activities, learning units, child-friendly software tools, digital interactives, and other supporting materials, showcasing a variety of MIT-developed educational research and collaborative outreach efforts across and beyond MIT. The site is intended for use by students, parents, teachers, and lifelong learners alike, with resources for children and adults at all learning levels and with varying levels of comfort with technology, across a range of artificial intelligence topics. The team has also gathered a variety of external resources to explore, such as Teachable Machine by Google, a browser-based platform that lets users train classifiers for their own image-recognition algorithms in a user-friendly way.
In the spirit of "mens et manus"—the MIT motto, meaning "mind and hand"—the vision of technology for learning at MIT is about empowering and inspiring learners of all ages in the pursuit of creative endeavors. The activities highlighted on the new website are designed in the tradition of constructionism: learning through project-based experiences in which learners build and share their work. The approach is also inspired by the idea of computational action, where children can design AI-enabled technologies to help others in their community.

"MIT has been a world leader in AI since the 1960s," says MIT professor of computer science and engineering Hal Abelson, who has long been involved in MIT's AI research and educational technology. "MIT's approach to making machines intelligent has always been strongly linked with our work in K-12 education. That work is aimed at empowering young people through computational ideas that help them understand the world and computational actions that empower them to improve life for themselves and their communities."

Research in computer science education and AI education highlights the importance of having a mix of plugged and unplugged learning approaches. Unplugged activities include kinesthetic or discussion-based activities developed to introduce children to concepts in AI and its societal impact without using a computer. Unplugged approaches to learning AI are found to be especially helpful for young children. Moreover, these approaches can also be accessible to learning environments (classrooms and homes) that have limited access to technology.
As computers continue to automate more and more routine tasks, inequity of education remains a key barrier to future opportunities, where success depends increasingly on intellect, creativity, social skills, and having specific skills and knowledge. This accelerating change raises the critical question of how to best prepare students, from children to lifelong learners, to be successful and to flourish in the era of AI.

It is important to help prepare a diverse and inclusive citizenry to be responsible designers and conscientious users of AI. In that spirit, the activities on aieducation.mit.edu range from hands-on programming to paper prototyping, to Socratic seminars, and even creative writing about speculative fiction. The learning units and project-based activities are designed to be accessible to a wide audience with different backgrounds and comfort levels with technology. A number of these activities leverage learning about AI as a way to connect to the arts, humanities, and social sciences, too, offering a holistic view of how AI intersects with different interests and endeavors.
The rising ubiquity of AI affects us all, but today a disproportionately small slice of the population has the skills or power to decide how AI is designed or implemented; worrying consequences have been seen in algorithmic bias and perpetuation of unjust systems. Democratizing AI through education, starting in K-12, will help to make it more accessible and diverse at all levels, ultimately helping to create a more inclusive, fair, and equitable future.



AI app can listen to your cough & detect COVID-19



EPFL researchers have developed an artificial intelligence-based system that can listen to your cough and indicate whether you have COVID-19.
With the new Coughvid app, developed by five researchers at EPFL's Embedded Systems Laboratory (ESL), you can record your cough on a smartphone and find out whether you might have COVID-19. So how can a smartphone app detect the new coronavirus? "According to the World Health Organization, 67.7% of people who have the virus present with a dry cough—producing no mucus—as opposed to the wet cough typical of a cold or allergy," says David Atienza, a professor at EPFL's School of Engineering who is also the head of ESL and a member of the Coughvid development team. The app is still being developed and will be released in the next few weeks.

Free and anonymous

Once the app is available, users will simply need to install it and record their cough—the results will appear immediately. "We wanted to develop a reliable, easy-to-use system that could be deployed for large-scale testing," says Atienza. "It's an alternative to conventional tests." In addition to being easy to use, the app has the advantage of being non-invasive, free and anonymous. "The app has a 70% accuracy rate," he adds. "That said, people who think they may have the disease should still go see their doctor. Coughvid is not a substitute for a medical exam."

Using artificial intelligence to help patients

Coughvid uses artificial intelligence to distinguish between different types of coughs based on their sound. "The idea is not new. Doctors already listen to their patients' coughs to diagnose whooping cough, asthma and pneumonia," says Atienza.
Right now his team is collecting as much data as possible to train the app to distinguish between the coughs of people with COVID-19, healthy people, and people with other kinds of respiratory ailments. "We'll release the app once we've accumulated enough data. It could take a few more weeks," says Atienza. In the meantime, COVID-19 patients who would like to contribute to the development work can record their cough at https://coughvid.epfl.ch/ or on the Coughvid mobile app.
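The article doesn't disclose how Coughvid's model actually works, but the core idea of telling a dry cough from a wet one by its sound can be sketched with a toy acoustic feature. The sketch below uses zero-crossing rate as a crude proxy for spectral content; the feature, the threshold, and the synthetic "recordings" are all illustrative assumptions, not EPFL's real pipeline:

```python
import math

def zero_crossing_rate(signal):
    """Fraction of consecutive sample pairs whose sign changes.
    A crude proxy for spectral content: noisier, higher-frequency
    signals cross zero more often."""
    crossings = sum(
        1 for a, b in zip(signal, signal[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(signal) - 1)

def classify_cough(signal, threshold=0.2):
    """Toy classifier: label a recording 'dry' if its zero-crossing
    rate exceeds the threshold, else 'wet'. Both the feature and the
    threshold are illustrative, not Coughvid's actual model."""
    return "dry" if zero_crossing_rate(signal) > threshold else "wet"

# Two synthetic 1-second "recordings" at a 1 kHz sample rate: a
# high-frequency burst (dry-like) and a low-frequency rumble (wet-like).
rate = 1000
dry_like = [math.sin(2 * math.pi * 300 * t / rate) for t in range(rate)]
wet_like = [math.sin(2 * math.pi * 40 * t / rate) for t in range(rate)]

print(classify_cough(dry_like))  # dry
print(classify_cough(wet_like))  # wet
```

A production system would instead extract rich spectral features from real recordings and train a statistical model on the labeled data the team is now collecting.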



2020 Artificial Intelligence Industry Trends






To date, some industries have been doing better than others at implementing AI. AI is lifting efficiency and performance across many diverse industries in 2020, with an array of benefits sweeping across multiple categories. As AI matures, these nine areas and industries look poised to benefit the most:



Financial Services

Banking and financial services are among the key industries benefiting from AI. The structured nature of financial data and the industry's past experiences with analytics may have made it easier for companies in this sector to implement artificial intelligence.

Many legacy organizations [in other industries] are still learning how to move from pilot projects to full operational deployments, since the underlying data and architecture are very different in a pilot versus in production. Financial services and insurance firms have been better at advanced analytics on transactions, since that data is well understood and managed.

In the future, the industry could see additional gains by applying AI to other areas beyond analytics, such as customer service.



Cybersecurity

Cybersecurity is likely to see big gains from AI in 2020. Already, many vendors are adding AI capabilities to their products. In its Top 10 Strategic Technology Trends for 2020, Gartner noted, "ML-based security tools can be a powerful addition to your toolkit when aimed at a specific high-value use case such as security monitoring, malware detection or network anomaly detection."

However, enterprises are likely going to need AI-based cybersecurity in order to counter new threats, which might make use of AI and machine learning themselves. As Forrester noted in its Predictions 2020, "The unfortunate reality will come to light that evil forces can adopt technologies such as AI and machine learning faster than security leaders can."






Predictive Maintenance

Many different industries, such as manufacturing, transportation, oil and gas, utilities, and even cloud computing data centers rely on complex machinery in order to stay in business. And any downtime of critical resources quickly results in significant financial losses. As a result, many organizations are investing in a combination of IoT sensors, computer vision, and/or machine learning technology to help them improve uptime by proactively identifying potential risks and scheduling maintenance in advance.

AI is improving business performance with predictive maintenance, where deep learning analyzes large amounts of high-dimensional data to detect anomalies in everything from factory assembly lines to building HVAC systems to commercial aircraft engines.
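The anomaly-detection idea behind predictive maintenance can be shown with a deliberately minimal sketch. Production systems use deep learning over high-dimensional sensor streams; the version below is just a z-score check on a single simulated sensor, with made-up readings:

```python
import statistics

def find_anomalies(readings, threshold=2.0):
    """Flag indices whose reading lies more than `threshold` standard
    deviations from the mean — a minimal stand-in for the deep-learning
    anomaly detectors used in real predictive-maintenance systems."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [i for i, r in enumerate(readings)
            if abs(r - mean) > threshold * stdev]

# Simulated bearing-temperature log (degrees C): steady around 70,
# with one spike that might precede a failure.
temps = [70.1, 69.8, 70.3, 70.0, 69.9, 70.2, 95.4, 70.1, 70.0, 69.7]
print(find_anomalies(temps))  # [6]
```

Flagging reading 6 early is exactly the point: maintenance can be scheduled before the component fails and takes the line down with it.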



Manufacturing

Manufacturing companies are beginning to dip their toes in the AI waters. In addition to predictive maintenance, manufacturing firms are using artificial intelligence and subsets like machine learning to better manage their supply chains, forecast demand, improve quality, deliver products, and increase customer satisfaction.

However, manufacturing firms will need to improve their underlying information architecture and data management practices if they want to be successful with their AI efforts. Even when machine learning algorithms do not need any reference architecture to function (visual identification of part defects for example), application of the results of the analysis does require that knowledge and information architecture.



Logistics and Transportation

Within the transportation industry, enterprises are using artificial intelligence to help them shave precious minutes off their delivery times and dollars off their costs. Spread across a large fleet, these small gains can result in millions of dollars of savings per year, alongside increases in customer satisfaction. In logistics and transportation, AI optimizes the routing of delivery traffic, improving fuel efficiency and reducing delivery times.
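The routing optimization described above can be illustrated with the simplest possible heuristic: always drive to the nearest unvisited stop. Real fleet optimizers use far more sophisticated solvers and live traffic data; the coordinates here are hypothetical:

```python
import math

def nearest_neighbour_route(depot, stops):
    """Greedy routing heuristic: from the depot, repeatedly drive to
    the closest unvisited stop. Illustrative only — production route
    optimizers solve a much richer version of this problem."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    route, remaining, here = [], list(stops), depot
    while remaining:
        nxt = min(remaining, key=lambda s: dist(here, s))
        route.append(nxt)
        remaining.remove(nxt)
        here = nxt
    return route

# Hypothetical delivery coordinates (km east/north of the depot).
depot = (0, 0)
stops = [(5, 5), (1, 0), (6, 6), (2, 1)]
print(nearest_neighbour_route(depot, stops))
# [(1, 0), (2, 1), (5, 5), (6, 6)]
```

Even a heuristic this simple beats visiting stops in arbitrary order, and shaving a few kilometers per run is exactly the kind of small gain that compounds across a large fleet.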

In addition, the transportation industry is also turning to AI to help with the creation of advanced safety systems, semi-autonomous vehicles, and eventually, fully autonomous vehicles. The result could be safer travel for everyone.



Travel

Like the transportation industry, the travel industry is also using machine learning (a subset of AI) to enhance logistics, which can allow them to reduce prices for customers. The travel industry is seeing positive results from AI in the area of fraud detection.

Criminals often target airlines and hotels to convert stolen credit card numbers into cash. They will book travel on the stolen card, and then attempt to get a refund back to their own personal cards, pocketing the difference. Others set up far more elaborate schemes where they book travel and then attempt to resell it on the black market. AI can help identify both kinds of fraudulent transactions, resulting in cost savings for the travel industry and their customers, as well as reducing the inconvenience for people with the stolen credit cards.



B2B

Another prime area for AI implementation right now is B2B, particularly B2B sales, where speech recognition is making it possible to track and optimize every customer interaction, from research to early engagement to closing the sale.

In addition to speech recognition, AI firms are also making use of advanced machine learning and analytics for a variety of purposes. For example, within B2B, pricing often differs significantly from customer to customer, and machine learning can help them better segment their customers and price more effectively. Of course, they can also use machine learning for forecasting, supply chain management, and for uncovering other insights that can help them become more competitive.
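The customer-segmentation step mentioned above is often done with clustering. Below is a minimal 1-D k-means sketch on hypothetical annual-spend figures; the data, the fixed initial centroids, and the single spend feature are all simplifying assumptions (real segmentation would cluster on many features at once):

```python
def kmeans_1d(values, centroids, iterations=10):
    """Minimal 1-D k-means: repeatedly assign each value to the
    nearest centroid, then move each centroid to the mean of its
    cluster. Initial centroids are fixed so the run is deterministic."""
    for _ in range(iterations):
        clusters = {c: [] for c in centroids}
        for v in values:
            nearest = min(centroids, key=lambda c: abs(v - c))
            clusters[nearest].append(v)
        centroids = [sum(vs) / len(vs) if vs else c
                     for c, vs in clusters.items()]
    return sorted(centroids)

# Hypothetical annual spend per B2B customer (thousands of dollars):
# a low-spend group and a high-spend group that a pricing team might
# want to treat differently.
spend = [12, 15, 11, 14, 95, 102, 98, 13, 100]
print(kmeans_1d(spend, centroids=[0, 50]))  # [13.0, 98.75]
```

The two resulting centroids give the pricing team a data-driven cut between segments instead of a guessed threshold.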



Healthcare

With coronavirus on the news every day, everyone is interested in ways to improve healthcare, and artificial intelligence seems like one promising way to speed innovation in the field. Healthcare continues to be a prime application for AI. Algorithms can assist with tasks as diverse as analysis of scans, development of vaccines, interpreting research results, and improving patient care.

Again, experts caution, however, that in order for AI to be effective within healthcare, organizations need to have good data, solid training models, and the right IT infrastructure in place to both conduct the analysis and secure sensitive data.



Retail

For online stores like Amazon, AI has become such an expected part of the sales process that people don't even notice it anymore. In retail sales, combining customer demographic and past transaction data with social media behavior observation helps generate individualized 'next product to buy' recommendations, which is now routine for many retailers. Through 2020, look for these recommendation engines to continue to improve and for retailers to find new ways to implement AI.
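The "next product to buy" engines described above are commonly built on collaborative filtering. Here is a deliberately tiny user-based sketch: find the most similar shopper by cosine similarity over purchase vectors, then recommend what they bought that the target hasn't. The catalogue and purchase histories are invented for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two purchase-history vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(target, others, catalogue):
    """Recommend items the most similar shopper bought that the target
    hasn't. A toy user-based collaborative filter; production engines
    layer demographics, browsing, and social signals on top."""
    best = max(others, key=lambda o: cosine(target, o))
    return [item for item, mine, theirs in zip(catalogue, target, best)
            if mine == 0 and theirs == 1]

catalogue = ["laptop", "mouse", "monitor", "desk", "chair"]
alice = [1, 1, 0, 0, 0]          # target shopper's purchase history
others = [
    [1, 1, 1, 0, 0],             # similar shopper
    [0, 0, 0, 1, 1],             # dissimilar shopper
]
print(recommend(alice, others, catalogue))  # ['monitor']
```

Because the similar shopper also bought a monitor, the monitor is surfaced as Alice's likely next purchase — the same logic, at vastly larger scale, behind the recommendations people no longer notice.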

One interesting data point is how democratized AI adoption is. AI flattens the competitive landscape, empowering smaller businesses to leapfrog and outmaneuver much bigger ones. Expect enterprises in retail and other industries to attempt to use AI technology to better compete against larger firms.



Artists explore AI


Adjusting to technological developments is not a new concept for the art world.

Wood panels were once the standard for paintings, but by the 17th century they were largely overtaken by canvas, and the paint itself changed, too. Video art, a mainstay now, was a new phenomenon in the 1960s.

More recently, augmented reality and virtual reality have captured the imagination of artists as ways to tell stories that we could not have imagined even 20 years ago.


"They Took the Faces From the Accused and the Dead … (SD18)," a grid of 3,240 mug shots, by Trevor Paglen. It is part of the show "Uncanny Valley: Being Human in the Age of AI" at the de Young Museum in San Francisco.




But the rise of artificial intelligence in art, a phenomenon in recent years, has a different cast to it. Not only is AI a tool for artists, who are employing machine intelligence in fascinating ways, it is also frequently a topic to be examined — sometimes in the same piece.

And underlying many of the works is a deep unease. As Lisa Phillips, the director of New York’s New Museum, put it, the worries come down to “the prospect that machines are going to take over.” She added, “What are we unleashing?”

Even the art market was alerted to a new realm when an AI-generated portrait initiated by the Paris-based art collective Obvious sold for $432,500 at Christie's in 2018. It resembled a traditional portrait of a man, but his features were smudged and blurry.

Museums and other exhibition spaces have also produced a flurry of current and coming shows involving AI that were scheduled for this spring, some of them delayed after closings because of the coronavirus pandemic.

They include a survey of the subject, “Uncanny Valley: Being Human in the Age of AI,” at the de Young Museum in San Francisco scheduled through Oct. 25; “Future Sketches,” which was on view earlier this year at Artechouse Washington and is intended to move to Artechouse’s Miami space later this year; Trevor Paglen’s photography at the Altman-Siegel Gallery in San Francisco; and “Ed Atkins: Get Life/Love’s Work” scheduled for the New Museum from June 24 to Sept. 27.

Paglen is one of the best-known artists in the AI territory. His work on it, and on the subject of state surveillance, helped him win a John D. and Catherine T. MacArthur Foundation fellowship (the “genius” grant) in 2017.

“I’ve been working on it for a while,” Paglen said.

“Once I started thinking about it, I haven’t stopped.” He is based in New York, where he has two of his three studios; the other is in Berlin.

His work at Altman-Siegel tries to connect the surveying of the American West in the 19th century with the way computers perceive the world via the data they are given — how what is officially “seen” creates power dynamics.

Paglen has a work in “Uncanny Valley,” too, called “They Took the Faces From the Accused and the Dead … (SD18),” a grid of 3,240 mug shots, used without the subjects’ consent, from the American National Standards Institute, a nonprofit group founded in 1918 that helps set agreed-upon standards across industries, including a wide array of tech fields.

The images were used to train facial-recognition programs, and Paglen uses them to question “how is data weaponized,” he said.

It has been a theme for other artists, too: Because machines have to be trained by people, what implicit biases are being passed on along the way?

“We live in a world in which things are being sorted into categories that are not inherent in nature,” Paglen said.

In addition to critiquing AI, Paglen has used it to create art. For his 2017 series “Adversarially Evolved Hallucinations,” he created an AI system that made a series of images.

“I was making my own training sets,” he said. “I built the taxonomies from scratch.” The resulting works, including a view of what a computer thinks a man looks like, may strike some as a bit spooky.

The organizer of “Uncanny Valley,” Claudia Schmuckli — the chief contemporary curator at the Fine Arts Museums of San Francisco, which includes the de Young — said that in her view, the overall tone of the works in “Uncanny Valley,” which features the work of 14 artists or collectives, was one of “concern, rather than anxiety.”

“A lot of the works in this show look at AI as an applied form of machine learning, how it actually works, not the speculative fantasy of AI,” she said. “It may be that not a lot of deep thinking has occurred about the potential consequences in the long run.”

Schmuckli moved from Houston to San Francisco in 2016, and she said it was partly the postelection revelations about hacking, Facebook and the data firm Cambridge Analytica that got her thinking. “I felt like this was an area I needed to urgently understand,” she said.

In the tech-focused Bay Area, the show has hit a nerve.

“The turnout for the opening was wholly amazing,” Schmuckli said. “We saw a lot of people who have never stepped foot in this museum before.”

The for-profit exhibition space Artechouse, with branches in New York, Washington and Miami, focuses exclusively on the nexus of art and tech, as its name suggests.

“We thought it was a niche that needed to be filled,” said co-founder Tati Pastukhova. Since its founding in 2017, about half of its shows have touched on AI in some way.

The latest such exhibition, “Future Sketches,” is a collaboration with Zach Lieberman, an artist who is also an adjunct associate professor at MIT’s Media Lab (his university bio also calls him a “hacker”).

Perhaps befitting a full-time techie, his work has a more positive spin than that of some others working with AI. His Artechouse piece “Expression Mirror,” originally created for the 2018 London Design Biennale, reads the facial expressions of a user, tracking muscle movements at 68 points on the face.

But when people look at the “mirror,” they do not see themselves. “Your face is replaced with someone’s face who has used it before,” Lieberman said. “It matches your expressions, like a smile or frown, and it learns as it interacts.” He calls this a “face action coding system,” a version of a “fingerprint.”
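The matching step Lieberman describes — pairing your expression with a previous visitor's closest expression — can be sketched as a nearest-neighbor lookup over landmark vectors. The 4-element vectors below stand in for the 68 facial landmarks the piece tracks, and the stored "visitors" are hypothetical:

```python
import math

def closest_expression(current, stored):
    """Return the key of the stored expression vector nearest (in
    Euclidean distance) to the current one — a sketch of the matching
    an installation like 'Expression Mirror' might perform."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(stored, key=lambda k: dist(current, stored[k]))

# Hypothetical landmark-displacement vectors for earlier visitors
# (e.g. mouth-corner lift, cheek raise, brow lower, lip press).
stored = {
    "visitor_1_smile": [0.9, 0.8, 0.1, 0.0],
    "visitor_2_frown": [0.0, 0.1, 0.9, 0.8],
}
current_smile = [0.8, 0.9, 0.0, 0.1]
print(closest_expression(current_smile, stored))  # visitor_1_smile
```

A smiling visitor is thus shown an earlier visitor's smile rather than their own face, and each new face enriches the pool the next match is drawn from.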

Lieberman said he understood why some artists plumbed the dark side of AI, because of its long-term implications and because anything to do with machines could unsettle.

“It’s this black box that you feed things into,” he said. “It’s inscrutable in some way.”

But Lieberman said he encouraged a diversity of views on matters technological. “I think it’s important to create artworks for the public to have all kinds of conversations — be they critical or playful or anything else.”

Artechouse’s other founder, Sandro Kereselidze, struck a similar note.

“Everything in the world has a positive and negative side,” he said, adding that “it’s in our power” to explore both sides of AI.

“As long as we can find the off button on the computer.”

Posted by Jai Ponnappan 


AI & Finance




The implementation of computers into different finance processes is nothing new; high-speed trading and the dominance of algorithms in the markets is a trend that has been discussed, analyzed, and reported on at length.

In areas such as fraud detection, risk management, credit rating and wealth advisory, AI is already augmenting or even replacing human decision makers. In fact, not deploying AI capabilities in these fields can be considered disastrous. With the ever-increasing amounts of data that need to be processed, AI systems are a must-have to improve accuracy.

The key point to remember during this conversation is that, as computers become increasingly sophisticated, there will also be drawbacks. As increasing amounts of trading are connected to computers, programs, and algorithms that operate without direct human oversight and intervention, there is a possibility that large swings in the market (that volatility word) will become more frequent.

As technological capabilities continue to improve, the amount of available data grows, and competitive pressures mount, the use of AI in finance will be pervasive. However, as with any new technology the adoption of AI brings its very own set of challenges. There are a number of concerns often cited by regulators, customers and experts which can be grouped into the following categories:





  • Bias
  • Accountability
  • Transparency

Potential causes of Bias:



  • An AI model is biased when it makes decisions that can be considered prejudiced against certain segments of the population. One might think that these are rare occurrences, as machines should be less ‘judgmental’ than humans. Unfortunately, as recent incidents have shown, they tend to be far more commonplace. AI failures can happen to even some of the largest companies in the world.
  • How do these biases happen? One reason why algorithms go rogue is that the problem is framed incorrectly. For instance, if an AI system calculating the creditworthiness of a customer is tasked to optimize profits, it could soon get into predatory behavior and look for people with low credit scores to sell subprime loans. This practice may be frowned upon by society and considered unethical, but the AI does not understand such nuances.
  • Another reason for unintended bias can be the lack of social awareness: The data fed into the system already contains the biases and prejudice that manifests the social system. The machine neither understands these biases nor can it consider removing them, it just tries to optimize the model for the biases in the system.
  • Finally, the data itself may not be a good representative sample. When there are low samples from certain minority segments, and some of these data points turn out to be bad, the algorithms could make some sweeping generalizations based on the limited data it has. This is not unlike any human decisions influenced by availability heuristics.
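The last cause — sweeping generalizations from a thin sample — is easy to demonstrate with a small simulation. Both groups below repay at the same true rate; the numbers and the 90% rate are invented purely to illustrate the statistics:

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

# Assume both customer groups repay loans at the SAME true rate.
TRUE_RATE = 0.9

def observed_rate(n_samples):
    """Estimate a group's repayment rate from n historical records.
    With only a handful of records, a few unlucky defaults can swing
    the estimate far from the true 90% — and a model trained on that
    estimate will treat the whole group unfairly."""
    repaid = sum(1 for _ in range(n_samples) if random.random() < TRUE_RATE)
    return repaid / n_samples

majority_estimate = observed_rate(10_000)  # well-represented group
minority_estimate = observed_rate(20)      # under-represented group

print(round(majority_estimate, 3), round(minority_estimate, 3))
```

The majority estimate lands close to 90% almost every run, while the 20-record estimate can easily wander several points away — exactly the availability-heuristic-like error described above, baked into an algorithm.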

Accountability Challenges:


  • The first question is who is responsible if AI makes a wrong decision. If a self-driving car causes an accident, is it the fault of the owner, who didn’t maintain the car correctly or did not respond when the algorithm made a bad call? Or is it purely an algorithmic issue? What about our previous example of predatory pricing – within what time frame is the firm employing that algorithm supposed to know that something is amiss and fix it? And to what extent is the firm responsible for the damages?
  • These are very important regulatory and ethical issues which need to be addressed. There are risks related to the technology which need to be carefully managed, especially when consumers are affected. This is why it’s important to employ the concept of algorithmic accountability, which revolves around the central tenet that the operators of an algorithm should put in place sufficient controls to make sure it performs as expected.
Missing Transparency:


  • Many algorithms suffer from a lack of transparency and interpretability, making it difficult to identify how and why they come to particular conclusions. As a result, it can be challenging to identify model bias or discriminatory behavior. 
  • It’s fair to say that the lack of transparency and the prevalence of black box models is the underlying cause for the two challenges outlined above.


From anecdotal evidence and review of market commentary, it does seem that the increasing technological dominance of trading may be leading to several different effects.


First, while volatility, judged by historical levels, was low in the 2015–2018 period, this does not provide the entire picture. The decrease in volatility may not, as some have speculated, be associated with the increased efficiency generated by algorithmic trading programs, but rather with a related trend. ETFs, passive investing tools, and the growing assets (trillions of dollars as of this writing) invested in these options may also be having an outsized impact on volatility and trading patterns. Put simply, as larger and larger percentages of investors and funds invest in similar, if not identical, trading tools and platforms, this may very well have a depressive impact on market volatility.

This may very well seem like a positive effect to retail investors with jitters linked to increases in market volatility, but it masks an underlying problem. If investing decisions are made outside of human oversight and supervision, this can inadvertently lead to market selloffs, runoffs, and other actions that do not reflect the underlying economic reality.

This is a tremendous opportunity for financial advisors, planners, and other advisory-focused finance professionals to offer real-time, real-world, and actionable business insights to clients and customers in a market that can seem as if it operates outside the realm of normal possibility. Volatility, although depressed during 2017, seems to have returned to the market with force in 2018, which emphasizes the importance of having a professional behind the wheel of various automated services and processes. Simply executing certain processes, trades, and business transactions faster will offer no benefit to either the organization or clients if those processes are poorly written or designed.

In order for practitioners to effectively leverage technology they must understand not only how the technology itself works, but also how it can – and should be – applied to the business decision making process itself.

Another area where AI can have, and already is having, an impact on the financial services landscape is the realm of ad hoc and management reporting, which constitutes a rather large percentage of the actual work performed by professionals in the space. Generating reports for management and supervisors forms a plurality of the work performed by many accounting professionals, and is one way professionals can quantitatively add value to the organization. Despite this, among the key issues raised about internal management reporting, or ad hoc reporting, are that data is not generated consistently, systems do not communicate with each other, and there are inevitably time lags between when different classes of information are generated.

In the context of accounting professionals seeking to elevate both themselves and the work performed internally, the amount of time spent correcting errors and manually adjusting entries and information deprives professionals of the time necessary to focus on higher-level activities. In other words, if accountants are spending too much time manually creating reports and fixing errors, those same professionals will never be able to achieve the oft-cited role of strategic advisor or business partner.

Audit and attestation work, discussed previously and to be expanded upon throughout this text, represents a prime area where artificial intelligence will have an impact on the profession. Currently, the entire process of auditing has several pain points, namely the fact that the final audit opinion is heavily (if not exclusively) reliant on expanding on findings generated from a small sample of organizational information.

Even with the subsequent analytical procedures and substantive tests added into the audit examination process, audit failures are all too common. AI tools, such as those represented by the partnership between IBM Watson and KPMG, are already having a dramatic impact on audit testing, procedures, and how auditors interact with both current and future clients. This evolution and transition, from a compliance-oriented function focused exclusively on financial information to a more comprehensive process that can operate on a continuous basis, also connects to several other trends. Introduced here, but examined in more detail later in this book, the connection between assurance work, non-financial information, and the importance of this data to the decision-making process opens up a proverbial world of opportunities for accounting practitioners.

Tax reporting and the discussion of taxation issues are not normally associated
with pleasant news, or with topics that management professionals enjoy, but that
should not be perceived as the final state of the conversation. Specifically, and
even in the current environment beset by changes in tax reporting, this debate and
analysis can, and should, be perceived both as an opportunity and as part of the continuous
management dialogue. Put simply, although the Tax Cuts and Jobs Act was
passed at the very end of 2017 – December 22nd, to be specific – the ripple effects of
this legislation are still being analyzed and processed by both individuals
and organizations. Processing the sheer number of changes, running scenario analyses,
and putting the results of these analyses into a format and report that support
management decision making is both a role accounting professionals
should play and a function enabled by AI tools. Taxes have an impact on the bottom
line, will continue to guide investment and operational decisions moving forward, and
will play a prominent role in the implementation and analysis of AI.
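The scenario analysis described above can be as simple as comparing after-tax results under alternative rate assumptions. The sketch below uses the actual statutory corporate rates before and after the Act (35% and 21%), but the pretax income figure is illustrative.

```python
# Minimal sketch of a tax scenario analysis: compare after-tax income
# under the pre- and post-TCJA statutory corporate rates. The pretax
# income figure is a hypothetical example.

scenarios = {
    "pre_TCJA_rate": 0.35,   # statutory corporate rate before the Act
    "post_TCJA_rate": 0.21,  # flat corporate rate after the Act
}

def after_tax(pretax_income, rate):
    """After-tax income at a given flat rate, rounded to cents."""
    return round(pretax_income * (1 - rate), 2)

pretax = 1_000_000
report = {name: after_tax(pretax, rate) for name, rate in scenarios.items()}
# report: {'pre_TCJA_rate': 650000.0, 'post_TCJA_rate': 790000.0}
```

Real engagements layer in deductions, credits, and state treatment, which is exactly where AI tools help manage the combinatorics.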

For financial institutions, it is clear that guidelines need to be put in place to help avoid bias, ensure safety and privacy, and to make the technology accountable and explainable. AI doesn’t have to be a black box – there are ways to make it more intuitive to humans such as Explainable AI (XAI).

XAI is a broad term that covers systems and tools for increasing the transparency of the AI decision-making process to humans. The major benefit of this approach is that it provides insight into the data, variables, and decision points used to make a recommendation. Since 2017, a great deal of effort has gone into XAI to solve the black-box problem. DARPA has been a pioneer in creating systems that facilitate XAI, and the field has since gained industry-wide as well as academic interest. In the past year, we have seen a significant increase in the adoption of XAI, with Google, Microsoft, and other large technology players starting to build such systems.

There are still challenges to XAI. The technology is nascent, and there are concerns that explainability compromises accuracy, or that adopting XAI exposes a firm's intellectual property. However, the success of AI will depend on our ability to create trust in the technology and to drive acceptance among users, customers, and the broader public. XAI can be a game changer, helping to increase transparency and overcome many of the hurdles that currently hold AI adoption back.
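One simple XAI technique is permutation feature importance: shuffle one input at a time and measure how much the model's accuracy drops. The sketch below applies it to a hypothetical toy "credit decision" model; the model, features, and data are all made up for illustration.

```python
import random

# Sketch of permutation feature importance, a simple XAI technique:
# shuffle each feature and record the resulting drop in accuracy.
# The model and data below are hypothetical.

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_features, seed=0):
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    importances = []
    for j in range(n_features):
        col = [row[j] for row in X]
        rng.shuffle(col)  # break the feature's link to the labels
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        importances.append(base - accuracy(model, X_perm, y))
    return importances

# Toy decision rule that only looks at feature 0 (say, income).
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]

imps = permutation_importance(model, X, y, n_features=2)
# Feature 1 is ignored by the model, so its importance is exactly 0;
# any accuracy drop is attributable to feature 0.
```

For a lender, an output like this is the beginning of an answer to "why was this application declined?" in terms a regulator can audit.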


Coronavirus treatment trial uses AI to speed results

The first hospital network in the U.S. has joined an international clinical trial that uses artificial intelligence to help determine, on an ongoing basis, which treatments for patients with the novel coronavirus are most effective.




Why it matters: In the midst of a pandemic, scientists face dueling needs: to find treatments quickly and to ensure they are safe and effective. By using this new type of adaptive platform, doctors hope to collect clinical data that will help more quickly determine what actually works.
“The solution is to find an optimal trade-off between doing something now, such as prescribing a drug off-label, or waiting until traditional clinical trials are complete.”
— Derek Angus, senior trial investigator and professor at University of Pittsburgh School of Medicine, told a press briefing
State of play: No treatments have been approved for COVID-19 yet. Researchers have made headway in mapping how the virus attaches and infects human cells — helping "guide drug developers, atom by atom, in devising safe and effective ways to treat COVID-19," National Institutes of Health director Francis Collins writes.
  • But new drugs take a long time to develop, partly because they must first be tested for safety before broader trials can test for efficacy.
  • While many companies are working on new treatments, others have focused on testing drugs for other conditions that have already met safety requirements.
What's new: The University of Pittsburgh Medical Center (UPMC) is the first American hospital system to join an international treatment trial called REMAP-COVID19, which is enrolling patients with COVID-19 in North America, Europe, Australia and New Zealand so far.


How it works: Starting Thursday, UPMC's system of 40 hospitals began offering the trial to patients who have moderate to severe complications from COVID-19, Angus said.
  • Patients in the trial will receive their current standard of care. About 12.5% will receive a placebo at launch, and the rest will be randomly assigned to interventions combining one or more antibiotics, antivirals, steroids, and medicines that regulate the immune system, including the drug hydroxychloroquine.
  • The platform, based on an existing one called REMAP-CAP, is integrated with UPMC's electronic health records, and the data collected feed a worldwide machine-learning system that continuously determines which combination of therapies is performing best.
  • As more data is collected, more patients will be steered toward the therapies doing well, Angus said.
  • The adaptive trial format, published Thursday in the journal Annals of the American Thoracic Society, can allow new treatments to be rolled into the trial.
"This idea came to us after the H1N1 [epidemic], when everyone scrambled to do traditional trials" but by the time those were established, the outbreak had moved on, Angus said. "We asked, how we can do this better."
The big picture: There are more than 400 listed clinical trials for treatments, therapies and vaccines related to COVID-19.



COVID-19 High Performance Computing Consortium



The COVID-19 High Performance Computing Consortium brings together the federal government, industry, and academic leaders to provide access to the world's most powerful high-performance computing resources in support of COVID-19 research: over 402 petaflops, 105,334 nodes, 3,539,044 CPU cores, 41,286 GPUs, and counting.






The world's leading medical researchers are rushing to find a treatment for COVID-19 with the help of the most powerful and advanced supercomputers in the world.
Researchers across the globe are submitting potential treatments and cures to the COVID-19 High Performance Computing Consortium.
The consortium, using a network of supercomputers and laboratories, can run simulations to narrow down or rule out drug compounds for use in a cure much faster than traditional methods.
"It's a means by which one can begin to analyze tremendously complex or large problems," says Vice President of Technical Computing at IBM Cognitive Systems Dave Turek. "Pharmaceutical companies may have billions of compounds that could be potential drugs."
Any researcher can submit proposals to the consortium for the supercomputers to run.
"So, there are very novel techniques, specifically using A.I. on these supercomputers, that are beginning to speculate about new kinds of molecules that could be created to treat COVID-19," says Turek.

The COVID-19 High Performance Computing Consortium is a unique private-public effort spearheaded by the White House Office of Science and Technology Policy, the U.S. Department of Energy and IBM to bring together federal government, industry, and academic leaders who are volunteering free compute time and resources on their world-class machines.


Consortium partners include:

  • Industry
    • IBM
    • Amazon Web Services
    • AMD
    • Google Cloud
    • Hewlett Packard Enterprise
    • Microsoft
    • NVIDIA
  • Academia
    • Massachusetts Institute of Technology
    • Rensselaer Polytechnic Institute
    • University of Illinois
    • University of Texas at Austin
    • University of California - San Diego
    • Carnegie Mellon University
    • University of Pittsburgh
    • Indiana University
    • University of Wisconsin-Madison
  • Department of Energy National Laboratories
    • Argonne National Laboratory
    • Lawrence Livermore National Laboratory
    • Los Alamos National Laboratory
    • Oak Ridge National Laboratory
    • National Energy Research Scientific Computing Center
    • Sandia National Laboratories
  • Federal Agencies
    • National Science Foundation
      • XSEDE
      • Pittsburgh Supercomputing Center (PSC)
      • Texas Advanced Computing Center (TACC)
      • San Diego Supercomputer Center (SDSC)
      • National Center for Supercomputing Applications (NCSA)
      • Indiana University Pervasive Technology Institute (IUPTI)
      • Open Science Grid (OSG)
      • National Center for Atmospheric Research (NCAR)
    • NASA
Researchers are invited to submit COVID-19 related research proposals to the consortium via this online portal, which will then be reviewed for matching with computing resources from one of the partner institutions. An expert panel comprised of top scientists and computing researchers will work with proposers to assess the public health benefit of the work, with emphasis on projects that can ensure rapid results.
Fighting COVID-19 will require extensive research in areas like bioinformatics, epidemiology, and molecular modeling to understand the threat we’re facing and form strategies to address it. This work demands a massive amount of computational capacity. The COVID-19 High Performance Computing Consortium helps aggregate computing capabilities from the world's most powerful and advanced computers to help COVID-19 researchers execute complex computational research programs to help fight the virus.
About the Consortium, the HPC Systems & How to Join
Consortium members manage a range of computing capabilities that span from small clusters to some of the largest supercomputers in the world. As a member, you would support this crucial work by not only offering your computational resources, but also your deep technical capabilities and expertise to help COVID-19 researchers execute complex computational research programs. We hope that you will join us in this crucial mission.
We are currently providing broad access to portions of over 30 supercomputing systems, representing over 402 petaflops, 105,334 nodes, 3,539,044 CPU cores, 41,286 GPUs, and counting. Their basic specifications are described below. Additional resources will be added as our consortium grows; please check back for updates.


Analyzing Corona Virus data with AI



Fully understanding and solving the coronavirus pandemic will be about the data, and there is no shortage of data sources, which are growing hourly. Now nine organizations, business and academic, have formed a coalition to bring coronavirus data sources together, with added incentives for researchers who can apply modern data analysis and artificial intelligence to them. Leading this effort is the Silicon Valley company C3.ai.

C3.ai, Microsoft Corporation, the University of Illinois at Urbana-Champaign (UIUC), the University of California, Berkeley, Princeton University, the University of Chicago, the Massachusetts Institute of Technology, Carnegie Mellon University, and the National Center for Supercomputing Applications at UIUC announced two major initiatives:

  • C3.ai Digital Transformation Institute (C3.ai DTI), a research consortium dedicated to accelerating the application of artificial intelligence to speed the pace of digital transformation in business, government, and society. Jointly managed by UC Berkeley and UIUC, C3.ai DTI will sponsor and fund world-leading scientists in a coordinated effort to advance the digital transformation of business, government, and society.
  • C3.ai DTI First Call for Research Proposals: C3.ai DTI invites scholars, developers, and researchers to embrace the challenge of abating COVID-19 and advance the knowledge, science, and technologies for mitigating future pandemics using AI. This is the first in what will be a series of bi-annual calls for Digital Transformation research proposals.
“The C3.ai Digital Transformation Institute is a consortium of leading scientists, researchers, innovators, and executives from academia and industry, joining forces to accelerate the social and economic benefits of digital transformation,” said Thomas M. Siebel, CEO of C3.ai. “We have the opportunity through public-private partnership to change the course of a global pandemic,” Siebel continued. “I cannot imagine a more important use of AI.”

Immediate Call for Proposals: AI Techniques to Mitigate Pandemic
Topics for Research Awards may include but are not limited to the following:
  1. Applying machine learning and other AI methods to mitigate the spread of the COVID-19 pandemic
  2. Genome-specific COVID-19 medical protocols, including precision medicine of host responses
  3. Biomedical informatics methods for drug design and repurposing
  4. Design and sharing of clinical trials for collecting data on medications, therapies, and interventions
  5. Modeling, simulation, and prediction for understanding COVID-19 propagation and efficacy of interventions
  6. Logistics and optimization analysis for design of public health strategies and interventions
  7. Rigorous approaches to designing sampling and testing strategies
  8. Data analytics for COVID-19 research harnessing private and sensitive data
  9. Improving societal resilience in response to the spread of the COVID-19 pandemic
  10. Broader efforts in biomedicine, infectious disease modeling, response logistics and optimization, public health efforts, tools, and methodologies around the containment of rising infectious diseases and response to pandemics, so as to be better prepared for future infectious diseases
The first call for proposals is open now, with a deadline of May 1, 2020. Researchers are invited to learn more about C3.ai DTI and how to submit their proposals for consideration at C3DTI.ai. Selected proposals will be announced by June 1, 2020.
Up to $5.8 million in awards will be funded from this first call, ranging from $100,000 to $500,000 each. In addition to cash awards, C3.ai DTI recipients will be provided with significant cloud computing, supercomputing, data access, and AI software resources and technical support from Microsoft and C3.ai. This will include unlimited use of the C3 AI Suite, access to the Microsoft Azure cloud platform, and access to the Blue Waters supercomputer at the National Center for Supercomputing Applications (NCSA) at UIUC.
“We are collecting a massive amount of data about MERS, SARS, and now COVID-19,” said Condoleezza Rice, former US Secretary of State. “We have a unique opportunity before us to apply the new sciences of AI and digital transformation to learn from these data how we can better manage these phenomena and avert the worst outcomes for humanity,” Rice continued. “I can think of no work more important and no response more cogent and timely than this important public-private partnership.”
“We’re excited about the C3.ai Digital Transformation Institute and are happy to join on a shared mission to accelerate research at these eminent research institutions,” said Eric Horvitz, Chief Scientist at Microsoft and C3.ai DTI Advisory Board Member. “As we launch this exciting private-public partnership, we’re enthusiastic about aiming the broader goals of the Institute at urgent challenges with the COVID-19 pandemic, as well as on longer-term research that could help to minimize future pandemics.”
"At UC Berkeley, we are thrilled to help co-lead this important endeavor to establish and advance the science of digital transformation at the nexus of machine learning, IoT, and cloud computing,” said Carol Christ, Chancellor, UC Berkeley. “We believe this Institute has the potential to make tremendous contributions by including ethics, new business models, and public policy to the technologies for transforming societal scale systems globally."
“The C3.ai Digital Transformation Institute, with its vision of cross-institutional and multi-disciplinary collaboration, represents an exciting model to help accelerate innovation in this important new field of study,” said Robert J. Jones, Chancellor of the University of Illinois at Urbana-Champaign. “At this time of a global health crisis, the Institute’s initial research focus will be on applying AI to mitigate the COVID-19 pandemic and to learn from it how to protect the world from future pandemics. C3.ai DTI is an important addition to the world’s fight against this disease and a powerful new resource in developing solutions to all societal challenges.”
“Together with the other C3.ai Digital Transformation Institute partners, we look forward to creating a powerful ecosystem of scholars and educators committed to applying 21st century technologies to the benefit of all,” said Chris Eisgruber, President of Princeton University. “This public-private partnership with innovators like C3.ai and Microsoft, providing support to world-class researchers across a range of disciplines, promises to bring rapid innovation to an exciting new frontier.”
“By strongly supporting multidisciplinary research and multi-institution projects, the C3.ai DTI represents a new avenue to develop breakthrough scientific results with a positive impact on society at a time of great need,” said Robert Zimmer, President of the University of Chicago. “I’m very pleased that the University of Chicago is part of this formidable collaboration between academia and industry to lead crucial innovation with great purpose and urgency.”
“The vision of C3.ai DTI is driven by the recognition of digital transformation as both a science as well as a scientific imperative for this pivotal time, applicable to every sector of our economy across the public and private sectors, including in healthcare, education, and public health,” said Farnam Jahanian, President of Carnegie Mellon University. “We are excited to participate in building out the Institute’s structure, program and further alliances. This is just the beginning of an ambitious journey that can have enormous positive impact on the world.”
"At MIT, we share the commitment of C3.ai DTI to advancing the frontiers of AI, cybersecurity and related fields while building into every inquiry a deep concern for ethics, privacy, equity and the public interest,” said Rafael Reif, President of the Massachusetts Institute of Technology. “At this moment of national emergency, we are proud to be part of this intensive effort to apply these sophisticated tools to better analyze the COVID-19 epidemic and devise effective ways to stop it. We look forward to accelerating this work both by collaborating with the companies and institutions in the initiative, and by drawing on the frontline experience and clinical data of our colleagues in Boston's world-class hospitals."


Building Community
At the heart of C3.ai DTI will be the constant flow of new ideas and expertise provided by ongoing research, visiting professors and research scholars, and faculty and scholars in residence, many of whom will come from beyond the member institutions. This rich ecosystem will form the foundational structure of a new Science of Digital Transformation.
“This is about global innovation based on multinational collaboration to accelerate the positive impact of AI by providing researchers access to real world data and to massive resources,” said Jim Snabe, Chairman, Siemens. “This is exactly the kind of multinational public-private partnership that is required to address this critical issue.”
“I could not be more proud of our association with C3.ai and Microsoft,” said Lorenzo Simonelli, CEO of Baker Hughes. “This is exactly the kind of leadership that is required to bring together the best of us to address this critical need.”
“We are at war and we must win it! Using all means,” said Jacques Attali, French statesman. “This great project will organize global scientific collaboration for accelerating the social impact of AI, and help to win this war, using new weapons, for the best of mankind.”
“In these difficult times, we need – now more than ever – to join our forces with scholars, innovators, and industry experts to propose solutions to complex problems. I am convinced that digital, data science and AI are a key answer,” said Gwenaëlle Avice-Huet, Executive Vice President of ENGIE. “The C3.ai Digital Transformation Institute is a perfect example of what we can do together to make the world better.”


Establishing the New Science of Digital Transformation
C3.ai DTI will focus its research on AI, Machine Learning, IoT, Big Data Analytics, human factors, organizational behavior, ethics, and public policy. The Institute will support the development of ML algorithms, data security, and cybersecurity techniques. C3.ai DTI research will analyze new business operation models, develop methods of implementing organizational change management and protecting privacy, and amplify the dialogue around the ethics and public policy of AI.
C3.ai Digital Transformation Institute is a Research Initiative that Includes:
  • Research Awards: Up to 26 cash awards annually, ranging from $100,000 to $500,000 each
  • Computing Resources: Access to free Azure Cloud and C3 AI Suite resources
  • Visiting Professors & Research Scientists: $750,000 per year to support C3.ai DTI Visiting Scholars
  • Curriculum Development: Annual awards to faculty at member institutions to develop curricula that teach the emerging field of Digital Transformation Science
  • Data Analytics Platform: C3.ai DTI will host an elastic cloud, big data, development, and operating platform, including the C3 AI Suite hosted on Microsoft Azure for the purpose of supporting C3.ai DTI research, curriculum development, and teaching.
  • Educational Program: $750,000 a year to support an annual conference, annual report, newsletters, published research, and website
  • Industry Alignment: C3.ai DTI Industry Partners will be established to assure the institute’s operations are aligned to the needs of the private sector.
  • Open Source: C3.ai DTI will strongly favor proposals that promise to publish their research in the public domain.
To support the Institute, C3.ai will provide C3.ai DTI $57,250,000 in cash contributions over the first five years of operation. C3.ai and Microsoft will contribute an additional $310 million in-kind, including use of the C3 AI Suite and Microsoft Azure computing, storage, and technical resources to support C3.ai DTI research.
To learn more about C3.ai DTI’s program, award opportunities, and call for proposals, please visit C3DTI.ai.

About C3.ai Digital Transformation Institute
C3.ai Digital Transformation Institute represents an innovative vision to take AI, ML, IoT, and big data research in a consortium model to a level that cannot be achieved at any one institution alone. Jointly managed and hosted by the University of California, Berkeley and the University of Illinois at Urbana-Champaign, C3.ai DTI will attract the world’s leading scientists to join in a coordinated and innovative effort to advance the digital transformation of business, government, and society, and establish the new Science of the Digital Transformation of Societal Systems.
About C3.ai
C3.ai is a leading AI software provider for accelerating digital transformation. C3.ai delivers the C3 AI Suite for developing, deploying, and operating large-scale AI, predictive analytics, and IoT applications in addition to an increasingly broad portfolio of turn-key AI applications. The core of the C3.ai offering is a revolutionary, model-driven AI architecture that dramatically enhances data science and application development. Learn more at: www.c3.ai.