
AI-Search & Rescue Defense project at Sea


Australian defense project tests potential of AI at sea


The Australian Department of Defense is conducting trials to test the potential of an artificial intelligence (AI) system in the AI-Search project at sea.

Search-and-rescue (SAR) trials conducted as part of the project will assess the potential of AI to augment and enhance SAR and to save lives at sea.

The project is conducted in collaboration with Warfare Innovation Navy Branch, Plan Jericho, the Royal Australian Air Force (RAAF) Air Mobility Group’s No 35 Squadron, and the University of Tasmania’s Australian Maritime College.



Modern AI is used for the detection of small and difficult-to-spot targets, including life rafts and individual survivors.

Plan Jericho AI lead Wing Commander Michael Gan said: “The idea was to train a machine-learning algorithm and AI sensors to complement existing visual search techniques.

“Our vision was to give any aircraft and other defense platforms, including unmanned aerial systems, a low-cost, improvised SAR capability.”

A series of new machine-learning algorithms was developed for the AI, alongside deterministic processes to analyze the imagery collected by camera sensors and aid human observers.



Last year saw the first successful trial conducted aboard a RAAF C-27J Spartan. The second trial was performed last month near Stradbroke Island, Queensland.

During these trials, a range of small targets were detected in a wide sea area while training the algorithm as part of the project.

The trials highlighted the feasibility of the technology and its easy integration into other Australian Defense Forces (ADF) airborne platforms.

Warfare Innovation Navy Branch lieutenant Harry Hubbert said: “There is a lot of discussion about AI in Defense but the sheer processing power of machine-learning applied to SAR has the potential to save lives and transform it.”


Artists explore AI


Adjusting to technological developments is not a new concept for the art world.

Wood panels were once the standard for paintings, but by the 17th century they were largely overtaken by canvas, and the paint itself changed, too. Video art, a mainstay now, was a new phenomenon in the 1960s.

More recently, augmented reality and virtual reality have captured the imagination of artists as ways to tell stories that we could not have imagined even 20 years ago.


"They Took the Faces From the Accused and the Dead … (SD18)," a grid of 3,240 mug shots, by Trevor Paglen. It is part of the show "Uncanny Valley: Being Human in the Age of AI" at the de Young Museum in San Francisco.




But the rise of artificial intelligence in art, a phenomenon in recent years, has a different cast to it. Not only is AI a tool for artists, who are employing machine intelligence in fascinating ways, it is also frequently a topic to be examined — sometimes in the same piece.

And underlying many of the works is a deep unease. As Lisa Phillips, the director of New York’s New Museum, put it, the worries come down to “the prospect that machines are going to take over.” She added, “What are we unleashing?”

Even the art market was alerted to a new realm when an AI-generated portrait initiated by the Paris-based art collective Obvious sold for $432,500 at Christie’s in 2018. It resembled a traditional portrait of a man, but his features were smudged and blurry.

Museums and other exhibition spaces have also produced a flurry of current and coming shows involving AI that were scheduled for this spring, some of them delayed by closings during the coronavirus pandemic.

They include a survey of the subject, “Uncanny Valley: Being Human in the Age of AI,” at the de Young Museum in San Francisco scheduled through Oct. 25; “Future Sketches,” which was on view earlier this year at Artechouse Washington and is intended to move to Artechouse’s Miami space later this year; Trevor Paglen’s photography at the Altman-Siegel Gallery in San Francisco; and “Ed Atkins: Get Life/Love’s Work” scheduled for the New Museum from June 24 to Sept. 27.

Paglen is one of the best-known artists in the AI territory. His work on it, and on the subject of state surveillance, helped him win a John D. and Catherine T. MacArthur Foundation fellowship (the “genius” grant) in 2017.

“I’ve been working on it for a while,” Paglen said.

“Once I started thinking about it, I haven’t stopped.” He is based in New York, where he has two of his three studios; the other is in Berlin.

His work at Altman-Siegel tries to connect the surveying of the American West in the 19th century with the way computers perceive the world via the data they are given — how what is officially “seen” creates power dynamics.

Paglen has a work in “Uncanny Valley,” too, called “They Took the Faces From the Accused and the Dead … (SD18),” a grid of 3,240 mug shots, used without the subjects’ consent, from the American National Standards Institute, a nonprofit group founded in 1918 that helps set agreed-upon standards across industries, including a wide array of tech fields.

The images were used to train facial-recognition programs, and Paglen uses them to question “how is data weaponized,” he said.

It has been a theme for other artists, too: Because machines have to be trained by people, what implicit biases are being passed on along the way?

“We live in a world in which things are being sorted into categories that are not inherent in nature,” Paglen said.

In addition to critiquing AI, Paglen has used it to create art. For his 2017 series “Adversarially Evolved Hallucinations,” he created an AI system that made a series of images.

“I was making my own training sets,” he said. “I built the taxonomies from scratch.” The resulting works, including a view of what a computer thinks a man looks like, may strike some as a bit spooky.

The organizer of “Uncanny Valley,” Claudia Schmuckli — the chief contemporary curator at the Fine Arts Museums of San Francisco, which includes the de Young — said that in her view, the overall tone of the works in “Uncanny Valley,” which features the work of 14 artists or collectives, was one of “concern, rather than anxiety.”

“A lot of the works in this show look at AI as an applied form of machine learning, how it actually works, not the speculative fantasy of AI,” she said. “It may be that not a lot of deep thinking has occurred about the potential consequences in the long run.”

Schmuckli moved from Houston to San Francisco in 2016, and she said it was partly the postelection revelations about hacking, Facebook and the data firm Cambridge Analytica that got her thinking. “I felt like this was an area I needed to urgently understand,” she said.

In the tech-focused Bay Area, the show has hit a nerve.

“The turnout for the opening was wholly amazing,” Schmuckli said. “We saw a lot of people who have never stepped foot in this museum before.”

The for-profit exhibition space Artechouse, with branches in New York, Washington and Miami, focuses exclusively on the nexus of art and tech, as its name suggests.

“We thought it was a niche that needed to be filled,” said co-founder Tati Pastukhova. Since its founding in 2017, about half of its shows have touched on AI in some way.

The latest such exhibition, “Future Sketches,” is a collaboration with Zach Lieberman, an artist who is also an adjunct associate professor at MIT’s Media Lab (his university bio also calls him a “hacker”).

Perhaps befitting a full-time techie, his work has a more positive spin than that of some others working with AI. His Artechouse piece “Expression Mirror,” originally created for the 2018 London Design Biennale, reads the facial expressions of a user, tracking muscle movements at 68 points on the face.

But when people look at the “mirror,” they do not see themselves. “Your face is replaced with someone’s face who has used it before,” Lieberman said. “It matches your expressions, like a smile or frown, and it learns as it interacts.” He calls this a “face action coding system,” a version of a “fingerprint.”

Lieberman said he understood why some artists plumbed the dark side of AI, because of its long-term implications and because anything to do with machines could unsettle.

“It’s this black box that you feed things into,” he said. “It’s inscrutable in some way.”

But Lieberman said he encouraged a diversity of views on matters technological. “I think it’s important to create artworks for the public to have all kinds of conversations — be they critical or playful or anything else.”

Artechouse’s other founder, Sandro Kereselidze, struck a similar note.

“Everything in the world has a positive and negative side,” he said, adding that “it’s in our power” to explore both sides of AI.

“As long as we can find the off button on the computer.”

Posted by Jai Ponnappan 


Coronavirus treatment trial uses AI to speed results

A U.S. hospital network has become the first in the country to join an international clinical trial that uses artificial intelligence to help determine, on an ongoing basis, which treatments for patients with the novel coronavirus are most effective.




Why it matters: In the midst of a pandemic, scientists face dueling needs: to find treatments quickly and to ensure they are safe and effective. By using this new type of adaptive platform, doctors hope to collect clinical data that will help more quickly determine what actually works.
“The solution is to find an optimal trade-off between doing something now, such as prescribing a drug off-label, or waiting until traditional clinical trials are complete.”
— Derek Angus, senior trial investigator and professor at University of Pittsburgh School of Medicine, told a press briefing
State of play: No treatments have been approved for COVID-19 yet. Researchers have made headway in mapping how the virus attaches and infects human cells — helping "guide drug developers, atom by atom, in devising safe and effective ways to treat COVID-19," National Institutes of Health director Francis Collins writes.
  • But new drugs take a long time to develop, partly because they must first be tested for safety before broader trials can test for efficacy.
  • While many companies are working on new treatments, others have focused on testing drugs for other conditions that have already met safety requirements.
What's new: The University of Pittsburgh Medical Center (UPMC) is the first American hospital system to join an international treatment trial called REMAP-COVID19, which is enrolling patients with COVID-19 in North America, Europe, Australia and New Zealand so far.


How it works: Starting Thursday, UPMC's system of 40 hospitals began offering the trial to patients who have moderate to severe complications from COVID-19, Angus said.
  • Patients in the trial will receive their current standard of care. About 12.5% will receive a placebo at launch, and the rest will be randomly assigned to one or more interventions, including antibiotics, antivirals, steroids, and immune-regulating medicines such as the drug hydroxychloroquine.
  • The platform, based on an existing one called REMAP-CAP, is integrated with UPMC's electronic health records, and the data collected feed a worldwide machine-learning system that continuously determines which combination of therapies is performing best.
  • As more data is collected, more patients will be steered toward the therapies doing well, Angus said.
  • The adaptive trial format, published Thursday in the journal Annals of the American Thoracic Society, can allow new treatments to be rolled into the trial.
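The adaptive logic described above can be illustrated with a toy response-adaptive randomization scheme. The sketch below uses Thompson sampling with Beta priors; it is a generic illustration of how adaptive platform trials steer more patients toward better-performing arms, not REMAP-COVID19's actual algorithm, and the arm names and counts are hypothetical.

```python
import random

# Hypothetical arms, each tracking [successes, failures] as a Beta posterior.
# These names and priors are illustrative only.
arms = {"standard_care": [1, 1], "intervention_a": [1, 1], "intervention_b": [1, 1]}

def assign_patient():
    """Assign the next patient to the arm whose sampled success rate is highest."""
    draws = {name: random.betavariate(s, f) for name, (s, f) in arms.items()}
    return max(draws, key=draws.get)

def record_outcome(arm, recovered):
    """Update the chosen arm's posterior with the observed outcome."""
    if recovered:
        arms[arm][0] += 1  # one more success
    else:
        arms[arm][1] += 1  # one more failure

# As outcomes accumulate, arms with better observed results are sampled
# higher more often, so later patients are steered toward them.
```

The key design property is that assignment stays randomized (so the trial remains a valid experiment) while the randomization weights shift continuously as evidence arrives.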
"This idea came to us after the H1N1 [epidemic], when everyone scrambled to do traditional trials" but by the time those were established, the outbreak had moved on, Angus said. "We asked, how can we do this better?"
The big picture: There are more than 400 listed clinical trials for treatments, therapies and vaccines related to COVID-19.



Healthy skepticism of Artificial Intelligence & Coronavirus


The COVID-19 outbreak has spurred considerable news coverage about the ways artificial intelligence (AI) can combat the pandemic’s spread. Unfortunately, much of it has failed to be appropriately skeptical about the claims of AI’s value. Like many tools, AI has a role to play, but its effect on the outbreak is probably small. While this may change in the future, technologies like data reporting, telemedicine, and conventional diagnostic tools are currently far more impactful than AI.




1. LOOK TO THE SUBJECT-MATTER EXPERTS

Still, various news articles have dramatized the role AI is playing in the pandemic by overstating what tasks it can perform, inflating its effectiveness and scale, neglecting the level of human involvement, and being careless in considering the related risks. In fact, AI hype around COVID-19 has been diverse enough to cover the greatest hits of exaggerated claims about AI. And so, framed around examples from the COVID-19 outbreak, here are eight considerations for a skeptic’s approach to AI claims.
No matter what the topic, AI is only helpful when applied judiciously by subject-matter experts—people with long-standing experience with the problem that they are trying to solve. Despite all the talk of algorithms and big data, deciding what to predict and how to frame those predictions is frequently the most challenging aspect of applying AI. Effectively predicting a badly defined problem is worse than doing nothing at all. Likewise, it always requires subject matter expertise to know if models will continue to work in the future, be accurate on different populations, and enable meaningful interventions.
In the case of predicting the spread of COVID-19, look to the epidemiologists, who have been using statistical models to examine pandemics for a long time. Simple mathematical models of smallpox mortality date all the way back to 1766, and modern mathematical epidemiology started in the early 1900s. The field has developed extensive knowledge of its particular problems, such as how to consider community factors in the rate of disease transmission, that most computer scientists, statisticians, and machine learning engineers will not have.
“There is no value in AI without subject-matter expertise.”
It is certainly the case that some of the epidemiological models employ AI. However, this should not be confused for AI predicting the spread of COVID-19 on its own. In contrast to AI models that only learn patterns from historical data, epidemiologists are building statistical models that explicitly incorporate a century of scientific discovery. These approaches are very, very different. Journalists that breathlessly cover the “AI that predicted coronavirus” and the quants on Twitter creating their first-ever models of pandemics should take heed: There is no value in AI without subject-matter expertise.


2. AI NEEDS LOTS OF DATA

The algorithms that conquered Go, a strategy board game, and “Jeopardy!” have accomplished impressive feats, but they are still just (very complex) pattern recognition. To learn how to do anything, AI needs tons of prior data with known outcomes. For instance, this might be the database of historical “Jeopardy!” questions, as well as the correct answers. Alternatively, a comprehensive computational simulation can be used to train the model, as is the case for Go and chess. Without one of these two approaches, AI cannot do much of anything. This explains why AI alone can’t predict the spread of new pandemics: There is no database of prior COVID-19 outbreaks (as there is for the flu).
Consider, for example, claims that AI can detect fevers from thermal imaging. To even attempt this, companies would need to collect extensive thermal imaging data from people while simultaneously taking their temperature with a conventional thermometer. In addition to attaining a sample diverse in age, gender, size, and other factors, this would also require that many of these people actually have fevers—the outcome they are trying to predict. It stretches credibility that, amid a global pandemic, companies are collecting data from significant populations of fevered persons. While there are other potential ways to attain pre-existing datasets, questioning the data sources is always a meaningful way to assess the viability of an AI system.


3. DON’T TRUST AI’S ACCURACY

The company Alibaba claims it can use AI on CT imagery to diagnose COVID-19, and now Bloomberg is reporting that the company is offering this diagnostic software to European countries for free. There is some appeal to the idea. Currently, COVID-19 diagnosis is done through a process called polymerase chain reaction (PCR), which requires specialized equipment. Including shipping time, it can easily take several days, whereas Alibaba says its model is much faster and is 96% accurate.
However, it is not clear that this accuracy number is trustworthy. A poorly kept secret of AI practitioners is that 96% accuracy is suspiciously high for any machine learning problem. If not carefully managed, an AI algorithm will go to extraordinary lengths to find patterns in data that are associated with the outcome it is trying to predict. However, these patterns may be totally nonsensical and only appear to work during development. In fact, an inflated accuracy number can actually be an important sign that an AI model is not going to be effective out in the world. That Alibaba claims its model works that well without caveat or self-criticism is suspicious on its face.
“[A]n inflated accuracy number can actually be an important sign that an AI model is not going to be effective out in the world.”
In addition, accuracy alone does not indicate enough to evaluate the quality of predictions. Imagine if 90% of the people in the training data were healthy, and the remaining 10% had COVID-19. If the model was correctly predicting all of the healthy people, a 96% accuracy could still be true—but the model would still be missing 40% of the infected people. This is why it’s important to also know the model’s “sensitivity,” which is the percent of correct predictions for individuals who have COVID-19 (rather than for everyone). This is especially important when one type of mistaken prediction is worse than the other, which is the case now. It is far worse to mistakenly suggest that a person with COVID-19 is not sick (which might allow them to continue infecting others) than it is to suggest a healthy person has COVID-19.
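The arithmetic in that example is easy to verify. The short sketch below reproduces the hypothetical 90/10 split described above and shows how a 96% overall accuracy can coexist with a sensitivity of only 60%.

```python
# Worked example of the accuracy-vs-sensitivity gap described above.
# Hypothetical cohort: 1,000 people, 90% healthy, 10% with COVID-19.
healthy, infected = 900, 100

# Suppose the model labels every healthy person correctly but
# catches only 60 of the 100 infected people.
true_negatives = 900
true_positives = 60
false_negatives = infected - true_positives  # 40 missed cases

accuracy = (true_positives + true_negatives) / (healthy + infected)
sensitivity = true_positives / infected

print(f"accuracy:    {accuracy:.0%}")     # 96%
print(f"sensitivity: {sensitivity:.0%}")  # 60%, i.e. 40% of cases missed
```

Reporting accuracy alone hides exactly the number that matters most here: the share of infected people the model fails to flag.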
Broadly, this is a task that seems like it could be done by AI, and it might be. Emerging research suggests that there is promise in this approach, but the debate is unsettled. For now, the American College of Radiology says that “the findings on chest imaging in COVID-19 are not specific, and overlap with other infections,” and that it should not be used as a “first-line test to diagnose COVID-19.” Until stronger evidence is presented and AI models are externally validated, medical providers should not consider changing their diagnostic workflows—especially not during a pandemic.


4. REAL-WORLD DEPLOYMENT DEGRADES AI PERFORMANCE

The circumstances in which an AI system is deployed can also have huge implications for how valuable it really is. When AI models leave development and start making real-world predictions, they nearly always degrade in performance. In evaluating CT scans, a model that can differentiate between healthy people and those with COVID-19 might start to fail when it encounters patients who are sick with the regular flu (and it is still flu season in the United States, after all). A drop of 10% accuracy or more during deployment would not be unusual.
In a recent paper about the diagnosis of malignant moles with AI, researchers noticed that their models had learned that rulers were frequently present in images of moles known to be malignant. So, of course, the model learned that images without rulers were more likely to be benign. This is a learning pattern that leads to the appearance of high accuracy during model development, but it causes a steep drop in performance during the actual application in a health-care setting. This is why independent validation is absolutely essential before using new and high-impact AI systems.
“When AI models leave development and start making real-world predictions, they nearly always degrade in performance.”
This should engender even more skepticism of claims that AI can be used to measure body temperature. Even if a company did invest in creating this dataset, as previously discussed, reality is far more complicated than a lab. While measuring core temperature from thermal body measurements is imperfect even in lab conditions, environmental factors make the problem much harder. The approach requires an infrared camera to get a clear and precise view of the inner face, and it is affected by humidity and the ambient temperature of the target. While it is becoming more effective, the Centers for Disease Control and Prevention still maintain that thermal imaging cannot be used on its own—a second confirmatory test with an accurate thermometer is required.


5. MOST PREDICTIONS MUST ENABLE AN INTERVENTION TO REALLY MATTER

High-stakes applications of AI typically require a prediction that isn’t just accurate, but one that also meaningfully enables an intervention by a human. This means sufficient trust in the AI system is necessary to take action, which could mean prioritizing health care based on the CT scans or allocating emergency funding to areas where modeling shows COVID-19 spread.
With thermal imaging for fever-detection, an intervention might imply using these systems to block entry into airports, supermarkets, pharmacies, and public spaces. But evidence shows that as many as 90% of people flagged by thermal imaging can be false positives. In an environment where febrile people know that they are supposed to stay home, this ratio could be much higher. So, while preventing people with fever (and potentially COVID-19) from enabling community transmission is a meaningful goal, there must be a willingness to establish checkpoints and a confirmatory test, or risk constraining significant chunks of the population.
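A quick base-rate calculation shows how a screening tool can flag mostly false positives even with respectable per-person performance. The prevalence, sensitivity, and specificity below are assumptions chosen for illustration, not measured figures for any real thermal-imaging system.

```python
# Why most thermal-imaging alerts can be false alarms: a base-rate sketch.
# All three numbers are hypothetical, for illustration only.
prevalence = 0.005   # 0.5% of people screened actually have a fever
sensitivity = 0.90   # fraction of febrile people the camera flags
specificity = 0.95   # fraction of non-febrile people correctly passed

n = 100_000
febrile = n * prevalence                          # 500 people
true_positives = febrile * sensitivity            # 450 flagged correctly
false_positives = (n - febrile) * (1 - specificity)  # 4,975 false alarms

precision = true_positives / (true_positives + false_positives)
print(f"flagged people who truly have a fever: {precision:.0%}")
# With a low base rate, the vast majority of flags are false positives,
# which is why a confirmatory thermometer check is essential.
```

Note that the false-alarm share rises further as prevalence falls, which matches the article's point: if febrile people stay home, the people being screened are even less likely to have fevers.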
This should be a constant consideration for implementing AI systems, especially those used in governance. For instance, the AI fraud-detection systems used by the IRS and the Centers for Medicare and Medicaid Services do not determine wrongdoing on their own; rather, they prioritize returns and claims for auditing by investigators. Similarly, the celebrated AI model that identifies Chicago homes with lead paint does not itself make the final call, but instead flags the residence for lead paint inspectors.


6. AI IS FAR BETTER AT MINUTE DETAILS THAN BIG, RARE EVENTS

Wired ran a piece in January titled “An AI Epidemiologist Sent the First Warnings of the Wuhan Virus” about a warning issued on Dec. 31 by infectious disease surveillance company, BlueDot. One blog post even said the company predicted the outbreak “before it happened.” However, this isn’t really true. There is reporting that suggests Chinese officials knew about the coronavirus from lab testing as early as Dec. 26. Further, doctors in Wuhan were spreading concerns online (despite Chinese government censorship) and the Program for Monitoring Emerging Diseases, run by human volunteers, put out a notification on Dec. 30.
That said, the approach taken by BlueDot and similar endeavors like HealthMap at Boston Children’s Hospital aren’t unreasonable. Both teams are a mix of data scientists and epidemiologists, and they look across health-care analyses and news articles around the world and in many languages in order to find potential new infectious disease outbreaks. This is a plausible use case for machine learning and natural language processing and is a useful tool to assist human observers. So, the hype, in this case, doesn’t come from skepticism about the feasibility of the application, but rather the specific type of value it brings.
“AI is unlikely to build the contextual understanding to distinguish between a new but manageable outbreak and an emerging pandemic of global proportions.”
Even as these systems improve, AI is unlikely to build the contextual understanding to distinguish between a new but manageable outbreak and an emerging pandemic of global proportions. AI can hardly be blamed. Predicting rare events is just very hard, and AI’s reliance on historical data does it no favors here. However, AI does offer quite a bit of value at the opposite end of the spectrum—providing minute detail.
For example, just last week, California Gov. Gavin Newsom explicitly praised BlueDot’s work to model the spread of the coronavirus to specific zip codes, incorporating flight-pattern data. This enables relatively precise provisioning of funding, supplies, and medical staff based on the level of exposure in each zip code. This reveals one of the great strengths of AI: its ability to quickly make individualized predictions when it would be much harder to do so individually. Of course, individualized predictions require individualized data, which can lead to unintended consequences.


7. THERE WILL BE UNINTENDED CONSEQUENCES

AI implementations tend to have troubling second-order consequences outside of their exact purview. For instance, consolidation of market power, insecure data accumulation, and surveillance concerns are very common byproducts of AI use. In the case of AI for fighting COVID-19, the surveillance issues are pervasive. In South Korea, the neighbors of confirmed COVID-19 patients were given details of that person’s travel and commute history. Taiwan, which in many ways had a proactive response to the coronavirus, used cell phone data to monitor individuals who had been assigned to stay in their homes. Israel and Italy are moving in the same direction. Of exceptional concern is the deployed social control technology in China, which nebulously uses AI to individually approve or deny access to public space.
Government action that curtails civil liberties during an emergency (and likely afterwards) is only part of the problem. The incentives that markets create can also lead to long-term undermining of privacy. At this moment, Clearview AI and Palantir are among the companies pitching mass-scale surveillance tools to the federal government. This is the same Clearview AI that scraped the web to make an enormous (and unethical) database of faces—and it was doing so as a reaction to an existing demand in police departments for identifying suspects with AI-driven facial recognition. If governments and companies continue to signal that they would use invasive systems, ambitious and unscrupulous start-ups will find inventive new ways to collect more data than ever before to meet that demand.


8. DON’T FORGET: AI WILL BE BIASED

In new approaches to using AI in high-stakes circumstances, bias should be a serious concern. Bias in AI models results in skewed estimates across different subgroups, such as women, racial minorities, or people with disabilities. In turn, this frequently leads to discriminatory outcomes, as AI models are often seen as objective and neutral.
While investigative reporting and scientific research has raised awareness about many instances of AI bias, it is important to realize that AI bias is more systemic than anecdotal. An informed AI skeptic should hold the default assumption that AI models are biased, unless proven otherwise.
“An informed AI skeptic should hold the default assumption that AI models are biased, unless proven otherwise.”
For example, a preprint paper suggests it is possible to use biomarkers to predict mortality risk of Wuhan COVID-19 patients. This might then be used to prioritize care for those most at risk—a noble goal. However, there are myriad sources of potential bias in this type of prediction. Biological associations between race, gender, age, and these biomarkers could lead to biased estimates that don’t represent mortality risk. Unmeasured behavioral characteristics can lead to biases, too. It is reasonable to suspect that smoking history, more common among Chinese men and a risk factor for death by COVID-19, could bias the model into broadly overestimating male risk of death.
Especially for models involving humans, there are so many potential sources of bias that they cannot be dismissed without investigation. If an AI model has no documented and evaluated biases, it should increase a skeptic’s certainty that they remain hidden, unresolved, and pernicious.


THE FUTURE OF AI SYSTEMS IS MORE PROMISING

While this article takes a deliberately skeptical perspective, the future impact of AI on many of these applications is bright. For instance, while diagnosis of COVID-19 with CT scans is of questionable value right now, the impact that AI is having on medical imaging is substantial. Emerging applications can evaluate the malignancy of tissue abnormalities, study skeletal structures, and reduce the need for invasive biopsies.
Other applications show great promise, though it is too soon to tell if they will meaningfully impact this pandemic. For instance, AI-designed drugs are just now starting human trials. The use of AI to summarize thousands of research papers may also quicken medical discoveries relevant to COVID-19.

AI is a widely applicable technology, but its advantages need to be hedged in a realistic understanding of its limitations. To that end, the goal of this paper is not to broadly disparage the contributions that AI can make, but instead to encourage a critical and discerning eye for the specific circumstances in which AI can be meaningful.



Analyzing coronavirus data with AI



Fully understanding and solving the coronavirus pandemic will be about the data. There’s no shortage of data sources, and they are growing hourly. Now nine organizations, business and academic, have formed a coalition to bring coronavirus data sources together and added incentives for researchers who can apply modern data analysis and artificial intelligence to them. Leading this effort is the Silicon Valley company C3.ai.

C3.ai, Microsoft Corporation, the University of Illinois at Urbana-Champaign (UIUC), the University of California, Berkeley, Princeton University, the University of Chicago, the Massachusetts Institute of Technology, Carnegie Mellon University, and the National Center for Supercomputing Applications at UIUC announced two major initiatives:

  • C3.ai Digital Transformation Institute (C3.ai DTI), a research consortium dedicated to accelerating the application of artificial intelligence to speed the pace of digital transformation in business, government, and society. Jointly managed by UC Berkeley and UIUC, C3.ai DTI will sponsor and fund world-leading scientists in a coordinated effort to advance the digital transformation of business, government, and society.
  • C3.ai DTI First Call for Research Proposals: C3.ai DTI invites scholars, developers, and researchers to embrace the challenge of abating COVID-19 and advancing the knowledge, science, and technologies for mitigating future pandemics using AI. This is the first in what will be a series of biannual calls for Digital Transformation research proposals.
“The C3.ai Digital Transformation Institute is a consortium of leading scientists, researchers, innovators, and executives from academia and industry, joining forces to accelerate the social and economic benefits of digital transformation,” said Thomas M. Siebel, CEO of C3.ai. “We have the opportunity through public-private partnership to change the course of a global pandemic,” Siebel continued. “I cannot imagine a more important use of AI.”

Immediate Call for Proposals: AI Techniques to Mitigate Pandemic
Topics for Research Awards may include but are not limited to the following:
  1. Applying machine learning and other AI methods to mitigate the spread of the COVID-19 pandemic
  2. Genome-specific COVID-19 medical protocols, including precision medicine of host responses
  3. Biomedical informatics methods for drug design and repurposing
  4. Design and sharing of clinical trials for collecting data on medications, therapies, and interventions
  5. Modeling, simulation, and prediction for understanding COVID-19 propagation and efficacy of interventions
  6. Logistics and optimization analysis for design of public health strategies and interventions
  7. Rigorous approaches to designing sampling and testing strategies
  8. Data analytics for COVID-19 research harnessing private and sensitive data
  9. Improving societal resilience in response to the spread of the COVID-19 pandemic
  10. Broader efforts in biomedicine, infectious disease modeling, response logistics and optimization, public health efforts, tools, and methodologies around the containment of rising infectious diseases and response to pandemics, so as to be better prepared for future infectious diseases
The first call for proposals is open now, with a deadline of May 1, 2020. Researchers are invited to learn more about C3.ai DTI and how to submit their proposals for consideration at C3DTI.ai. Selected proposals will be announced by June 1, 2020.
Up to $5.8 million in awards will be funded from this first call, ranging from $100,000 to $500,000 each. In addition to cash awards, C3.ai DTI recipients will receive significant cloud-computing, supercomputing, data-access, and AI software resources, along with technical support from Microsoft and C3.ai. This includes unlimited use of the C3 AI Suite, access to the Microsoft Azure cloud platform, and access to the Blue Waters supercomputer at the National Center for Supercomputing Applications (NCSA) at UIUC.
“We are collecting a massive amount of data about MERS, SARS, and now COVID-19,” said Condoleezza Rice, former US Secretary of State. “We have a unique opportunity before us to apply the new sciences of AI and digital transformation to learn from these data how we can better manage these phenomena and avert the worst outcomes for humanity,” Rice continued. “I can think of no work more important and no response more cogent and timely than this important public-private partnership.”
“We’re excited about the C3.ai Digital Transformation Institute and are happy to join on a shared mission to accelerate research at these eminent research institutions,” said Eric Horvitz, Chief Scientist at Microsoft and C3.ai DTI Advisory Board Member. “As we launch this exciting private-public partnership, we’re enthusiastic about aiming the broader goals of the Institute at urgent challenges with the COVID-19 pandemic, as well as on longer-term research that could help to minimize future pandemics.”
"At UC Berkeley, we are thrilled to help co-lead this important endeavor to establish and advance the science of digital transformation at the nexus of machine learning, IoT, and cloud computing,” said Carol Christ, Chancellor, UC Berkeley. “We believe this Institute has the potential to make tremendous contributions by including ethics, new business models, and public policy to the technologies for transforming societal scale systems globally."
“The C3.ai Digital Transformation Institute, with its vision of cross-institutional and multi-disciplinary collaboration, represents an exciting model to help accelerate innovation in this important new field of study,” said Robert J. Jones, Chancellor of the University of Illinois at Urbana-Champaign. “At this time of a global health crisis, the Institute’s initial research focus will be on applying AI to mitigate the COVID-19 pandemic and to learn from it how to protect the world from future pandemics. C3.ai DTI is an important addition to the world’s fight against this disease and a powerful new resource in developing solutions to all societal challenges.”
“Together with the other C3.ai Digital Transformation Institute partners, we look forward to creating a powerful ecosystem of scholars and educators committed to applying 21st century technologies to the benefit of all,” said Chris Eisgruber, President of Princeton University. “This public-private partnership with innovators like C3.ai and Microsoft, providing support to world-class researchers across a range of disciplines, promises to bring rapid innovation to an exciting new frontier.”
“By strongly supporting multidisciplinary research and multi-institution projects, the C3.ai DTI represents a new avenue to develop breakthrough scientific results with a positive impact on society at a time of great need,” said Robert Zimmer, President of the University of Chicago. “I’m very pleased that the University of Chicago is part of this formidable collaboration between academia and industry to lead crucial innovation with great purpose and urgency.”
“The vision of C3.ai DTI is driven by the recognition of digital transformation as both a science as well as a scientific imperative for this pivotal time, applicable to every sector of our economy across the public and private sectors, including in healthcare, education, and public health,” said Farnam Jahanian, President of Carnegie Mellon University. “We are excited to participate in building out the Institute’s structure, program and further alliances. This is just the beginning of an ambitious journey that can have enormous positive impact on the world.”
"At MIT, we share the commitment of C3.ai DTI to advancing the frontiers of AI, cybersecurity and related fields while building into every inquiry a deep concern for ethics, privacy, equity and the public interest,” said Rafael Reif, President of the Massachusetts Institute of Technology. “At this moment of national emergency, we are proud to be part of this intensive effort to apply these sophisticated tools to better analyze the COVID-19 epidemic and devise effective ways to stop it. We look forward to accelerating this work both by collaborating with the companies and institutions in the initiative, and by drawing on the frontline experience and clinical data of our colleagues in Boston's world-class hospitals."


Building Community
At the heart of C3.ai DTI will be the constant flow of new ideas and expertise provided by ongoing research, visiting professors and research scholars, and faculty and scholars in residence, many of whom will come from beyond the member institutions. This rich ecosystem will form the foundational structure of a new Science of Digital Transformation.
“This is about global innovation based on multinational collaboration to accelerate the positive impact of AI by providing researchers access to real world data and to massive resources,” said Jim Snabe, Chairman, Siemens. “This is exactly the kind of multinational public-private partnership that is required to address this critical issue.”
“I could not be more proud of our association with C3.ai and Microsoft,” said Lorenzo Simonelli, CEO of Baker Hughes. “This is exactly the kind of leadership that is required to bring together the best of us to address this critical need.”
“We are at war and we must win it! Using all means,” said Jacques Attali, French statesman. “This great project will organize global scientific collaboration for accelerating the social impact of AI, and help to win this war, using new weapons, for the best of mankind.”
“In these difficult times, we need – now more than ever – to join our forces with scholars, innovators, and industry experts to propose solutions to complex problems. I am convinced that digital, data science and AI are a key answer,” said Gwenaëlle Avice-Huet, Executive Vice President of ENGIE. “The C3.ai Digital Transformation Institute is a perfect example of what we can do together to make the world better.”


Establishing the New Science of Digital Transformation
C3.ai DTI will focus its research on AI, Machine Learning, IoT, Big Data Analytics, human factors, organizational behavior, ethics, and public policy. The Institute will support the development of ML algorithms, data security, and cybersecurity techniques. C3.ai DTI research will analyze new business operation models, develop methods of implementing organizational change management and protecting privacy, and amplify the dialogue around the ethics and public policy of AI.
C3.ai Digital Transformation Institute is a Research Initiative that Includes:
  • Research Awards: Up to 26 cash awards annually, ranging from $100,000 to $500,000 each
  • Computing Resources: Access to free Azure Cloud and C3 AI Suite resources
  • Visiting Professors & Research Scientists: $750,000 per year to support C3.ai DTI Visiting Scholars
  • Curriculum Development: Annual awards to faculty at member institutions to develop curricula that teach the emerging field of Digital Transformation Science
  • Data Analytics Platform: C3.ai DTI will host an elastic cloud, big data, development, and operating platform, including the C3 AI Suite hosted on Microsoft Azure for the purpose of supporting C3.ai DTI research, curriculum development, and teaching.
  • Educational Program: $750,000 a year to support an annual conference, annual report, newsletters, published research, and website
  • Industry Alignment: C3.ai DTI Industry Partners will be established to assure the institute’s operations are aligned to the needs of the private sector.
  • Open Source: C3.ai DTI will strongly favor proposals that promise to publish their research in the public domain.
To support the Institute, C3.ai will provide C3.ai DTI $57,250,000 in cash contributions over the first five years of operation. C3.ai and Microsoft will contribute an additional $310 million in-kind, including use of the C3 AI Suite and Microsoft Azure computing, storage, and technical resources to support C3.ai DTI research.
To learn more about C3.ai DTI’s program, award opportunities, and call for proposals, please visit C3DTI.ai.

About C3.ai Digital Transformation Institute
C3.ai Digital Transformation Institute represents an innovative vision to take AI, ML, IoT, and big data research in a consortium model to a level that cannot be achieved at any one institution alone. Jointly managed and hosted by the University of California, Berkeley and the University of Illinois at Urbana-Champaign, C3.ai DTI will attract the world’s leading scientists to join in a coordinated and innovative effort to advance the digital transformation of business, government, and society, and establish the new Science of the Digital Transformation of Societal Systems.
About C3.ai
C3.ai is a leading AI software provider for accelerating digital transformation. C3.ai delivers the C3 AI Suite for developing, deploying, and operating large-scale AI, predictive analytics, and IoT applications in addition to an increasingly broad portfolio of turn-key AI applications. The core of the C3.ai offering is a revolutionary, model-driven AI architecture that dramatically enhances data science and application development. Learn more at: www.c3.ai.


Use of Artificial Intelligence during the COVID-19 Pandemic




Here are some of the projects using AI to address the coronavirus outbreak:



AI in Drug Discovery




A number of research projects are using AI to identify drugs that were developed to fight other diseases but could now be repurposed against the coronavirus. By using AI to study the molecular structure of existing drugs, companies aim to identify which ones might disrupt the way COVID-19 works.

BenevolentAI, a London-based drug-discovery company, began turning its attention towards the coronavirus problem in late January. The company's AI-powered knowledge graph can digest large volumes of scientific literature and biomedical research to find links between the genetic and biological properties of diseases and the composition and action of drugs.


The company had previously been focused on chronic disease, rather than infections, but was able to retool the system to work on COVID-19 by feeding it the latest research on the virus. "Because of the amount of data that's being produced about COVID-19 and the capabilities we have in being able to machine-read large amounts of documents at scale, we were able to adapt [the knowledge graph] so to take into account the kinds of concepts that are more important in biology, as well as the latest information about COVID-19 itself," says Olly Oechsle, lead software engineer at BenevolentAI. 

While a large body of biomedical research has built up around chronic diseases over decades, COVID-19 only has a few months' worth of studies attached to it. But researchers can use the information that they have to track down other viruses with similar elements, see how they function, and then work out which drugs could be used to inhibit the virus. 

"The infection process of COVID-19 was identified relatively early on. It was found that the virus binds to a particular protein on the surface of cells called ACE2. And what we could do with our knowledge graph is to look at the processes surrounding that entry of the virus and its replication, rather than anything specific in COVID-19 itself. That allows us to look back a lot more at the literature that concerns different coronaviruses, including SARS, etc. and all of the kinds of biology that goes on in that process of viruses being taken into cells," Oechsle says.

The system suggested a number of compounds that could potentially have an effect on COVID-19 including, most promisingly, a drug called Baricitinib. The drug is already licensed to treat rheumatoid arthritis. The properties of Baricitinib mean that it could potentially slow down the process of the virus being taken up into cells and reduce its ability to infect lung cells. More research and human trials will be needed to see whether the drug has the effects AI predicts.
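The knowledge-graph reasoning described above can be caricatured as graph search: start at a disease node, follow literature-derived links through receptors and cellular processes, and collect the drug nodes you reach. Below is a minimal stdlib sketch. The graph is entirely invented for illustration; the ACE2/endocytosis/baricitinib chain mirrors the reasoning quoted above, but these edges are hard-coded, not mined from literature as BenevolentAI's system does.

```python
from collections import deque

# Toy knowledge graph as an adjacency map: nodes are diseases,
# proteins, cellular processes, and drugs. All entities and links
# here are illustrative, not curated biomedical data.
EDGES = {
    "COVID-19": ["ACE2"],
    "ACE2": ["COVID-19", "endocytosis"],
    "endocytosis": ["ACE2", "AAK1"],
    "AAK1": ["endocytosis", "baricitinib"],
    "baricitinib": ["AAK1", "rheumatoid arthritis"],
    "rheumatoid arthritis": ["baricitinib"],
}
DRUGS = {"baricitinib"}  # node-type labels, hard-coded for the sketch

def candidate_drugs(start, max_hops=4):
    """Breadth-first search: drugs reachable within max_hops of a disease."""
    seen, frontier, found = {start}, deque([(start, 0)]), []
    while frontier:
        node, dist = frontier.popleft()
        if node in DRUGS:
            found.append(node)
        if dist < max_hops:
            for nxt in EDGES.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, dist + 1))
    return sorted(found)

print(candidate_drugs("COVID-19"))  # ['baricitinib']
```

The `max_hops` cutoff is the interesting knob: widening it lets the search reach drugs connected only through general coronavirus biology, which is how a graph over SARS-era literature can surface candidates for a new virus.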




Tracking the spread of COVID-19


HealthMap, an automated disease-surveillance system based at Boston Children's Hospital, flagged the outbreak early by scanning online news and other public sources. Human epidemiologists at ProMed, an infectious-disease-reporting group, published their own alert just half an hour after HealthMap, and John Brownstein, who co-founded HealthMap, also acknowledged the importance of human virologists in studying the spread of the outbreak.
"What we quickly realised was that as much as it's easy to scrape the web to create a really detailed line list of cases around the world, you need an army of people; it can't just be done through machine learning and webscraping," he said. HealthMap also drew on the expertise of researchers from universities across the world, using "official and unofficial sources" to feed into the line list.
The data generated by HealthMap has been made public, to be combed through by scientists and researchers looking for links between the disease and certain populations, as well as containment measures. The data has already been combined with data on human movements, gleaned from Baidu, to see how population mobility and control measures affected the spread of the virus in China. 
HealthMap has continued to track coronavirus throughout the outbreak, visualising its spread across the world by time and location.
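The "line list" the HealthMap team refers to is simply a table with one row per reported case, which can then be aggregated by time and place for mapping and modelling. A toy stdlib sketch, using made-up records rather than real HealthMap data:

```python
from collections import Counter
from datetime import date

# A "line list" holds one row per reported case. These records are
# invented examples, not real HealthMap data.
line_list = [
    {"reported": date(2020, 1, 23), "location": "Wuhan, CN", "source": "official"},
    {"reported": date(2020, 1, 23), "location": "Wuhan, CN", "source": "news"},
    {"reported": date(2020, 1, 25), "location": "Bangkok, TH", "source": "official"},
]

# Collapse case-level rows into (date, location) counts: the
# time-and-place series a map visualisation would consume.
counts = Counter((r["reported"], r["location"]) for r in line_list)
for (day, loc), n in sorted(counts.items()):
    print(day.isoformat(), loc, n)
```

Keeping the case-level rows (rather than only the aggregates) is what lets later researchers re-slice the data by demographics, source reliability, or containment measures, as described above.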




Spotting signs of a COVID-19 infection in medical images


Canadian startup DarwinAI has developed a neural network that can screen X-rays for signs of COVID-19 infection. While using swabs from patients is the default for testing for coronavirus, analysing chest X-rays could offer an alternative to hospitals that don't have enough staff or testing kits to process all their patients quickly.
DarwinAI released COVID-Net as an open-source system, and "the response has just been overwhelming", says DarwinAI CEO Sheldon Fernandez. More datasets of X-rays were contributed to train the system, which has now learnt from over 17,000 images, while researchers from Indonesia, Turkey, India and other countries are all now working on COVID-Net. "Once you put it out there, you have 100 eyes on it very quickly, and they'll very quickly give you some low-hanging fruit on ways to make it better," Fernandez said.
The company is now working on turning COVID-Net from a technical implementation to a system that can be used by healthcare workers. It's also now developing a neural network for risk-stratifying patients that have contracted COVID-19 as a way of separating those with the virus who might be better suited to recovering at home in self-isolation, and those who would be better coming into hospital. 
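The risk-stratification step described in the last paragraph amounts to mapping patient features to a home-versus-hospital decision. The sketch below is a hypothetical rule-based stand-in: the features, thresholds, and weights are invented for illustration, where DarwinAI's actual system would learn such a mapping from clinical data with a neural network.

```python
# Hypothetical triage rule in the spirit of the risk-stratification
# task described above. Every threshold and weight here is invented;
# a real system would learn them from clinical outcomes data.
def triage(age, spo2, comorbidities):
    """Return 'home' or 'hospital' for a confirmed COVID-19 patient."""
    score = 0
    score += 2 if age >= 65 else 0          # elderly patients at higher risk
    score += 3 if spo2 < 94 else 0          # low blood-oxygen saturation (%)
    score += len(comorbidities)             # e.g. diabetes, hypertension
    return "hospital" if score >= 3 else "home"

print(triage(age=30, spo2=98, comorbidities=[]))                # home
print(triage(age=70, spo2=92, comorbidities=["diabetes"]))      # hospital
```

The design point is the output contract, not the rule: whether the scorer is a hand-written heuristic or a trained network, downstream hospital workflows only need the binary recommendation plus, ideally, the score that produced it.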




Monitoring how the virus and lockdown are affecting mental health


Johannes Eichstaedt, assistant professor in Stanford University's department of psychology, has been examining Twitter posts to estimate how COVID-19, and the changes that it's brought to the way we live our lives, is affecting our mental health. 
Using AI-driven text analysis, Eichstaedt queried over two million tweets hashtagged with COVID-related terms during February and March, and combined them with other datasets on relevant factors, including the number of cases, deaths, demographics and more, to illuminate the virus's effects on mental health.
The analysis showed that much of the COVID-19-related chat in urban areas was centred on adapting to living with, and preventing the spread of, the infection. Rural areas discussed adapting far less, which the psychologist attributed to the relative prevalence of the disease in urban areas compared to rural, meaning those in the country have had less exposure to the disease and its consequences.
There are also differences in how the young and old are discussing COVID-19. "In older counties across the US, there's talk about Trump and the economic impact, whereas in young counties, it's much more problem-focused coping; the one language cluster that stands out there is that in counties that are younger, people talk about washing their hands," Eichstaedt said.
"We really need to measure the wellbeing impact of COVID-19, and we very quickly need to think about scalable mental healthcare and now is the time to mobilise resources to make that happen," Eichstaedt told the Stanford virtual conference. 
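A heavily simplified sketch of the kind of regional text analysis described above: count topic-keyword hits in COVID-tagged posts per region. The posts, regions, and keyword lists are all invented for illustration; real work like Eichstaedt's applies trained language models to millions of tweets joined with county-level demographics.

```python
from collections import Counter

# Invented topic lexicons; real studies use learned language clusters,
# not hand-picked keywords.
TOPICS = {
    "prevention": {"wash", "hands", "mask", "distancing"},
    "economy": {"economy", "jobs", "market"},
}

# Invented example posts, grouped by region.
posts = {
    "urban": ["wash your hands", "social distancing works", "wear a mask"],
    "rural": ["the economy is hurting", "jobs are at risk"],
}

def topic_counts(texts):
    """Count keyword hits per topic across a list of posts."""
    counts = Counter()
    for text in texts:
        words = set(text.lower().split())
        for topic, keys in TOPICS.items():
            counts[topic] += len(words & keys)
    return counts

for region, texts in posts.items():
    print(region, dict(topic_counts(texts)))
```

Even this crude counting reproduces the shape of the finding quoted above: relative topic frequencies differ by region, and it is those differences, joined against case counts and demographics, that carry the signal.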





Forecasting how coronavirus cases and deaths will spread across cities – and why


Google-owned machine-learning community Kaggle is setting a number of COVID-19-related challenges to its members, including forecasting the number of cases and fatalities by city as a way of identifying exactly why some places are hit worse than others. 


"The goal here isn't to build another epidemiological model… there are lots of good epidemiological models out there. Actually, the reason we have launched this challenge is to encourage our community to play with the data and try and pick apart the factors that are driving difference in transmission rates across cities," Kaggle's CEO Anthony Goldbloom told the Stanford conference.
Currently, the community is working on a dataset covering two months of infections this year across 163 countries, developing models and interrogating the data for factors that predict spread.
Most of the community's models have been producing feature-importance plots to show which elements may be contributing to the differences in cases and fatalities. So far, said Goldbloom, latitude and longitude are showing up as having a bearing on COVID-19 spread. The next generation of machine-learning-driven feature-importance plots will tease out the real reasons for geographical variances. 
"It's not the country that is the reason that transmission rates are different in different countries; rather, it's the policies in that country, or it's the cultural norms around hugging and kissing, or it's the temperature. We expect that as people iterate on their models, they'll bring in more granular datasets and we'll start to see these variable-importance plots becoming much more interesting and starting to pick apart the most important factors driving differences in transmission rates across different cities. This is one to watch," Goldbloom added.
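One common way to produce the feature-importance plots Goldbloom describes is permutation importance: shuffle one feature's values and measure how much the model's error grows. A self-contained toy sketch follows; the data and the "model" are invented (case counts depend on temperature but not on a second, irrelevant feature), where Kaggle entries would apply the same idea to trained models on real case data.

```python
import random

# Toy data: case counts depend on temperature only. The "noise"
# feature is irrelevant by construction.
random.seed(0)
data = [{"temp": t, "noise": random.random(), "cases": 100 - 3 * t}
        for t in range(30)]

def predict(row):
    # Stands in for a trained model; here it happens to be exact.
    return 100 - 3 * row["temp"]

def mse(rows):
    """Mean squared error of the model over a set of rows."""
    return sum((predict(r) - r["cases"]) ** 2 for r in rows) / len(rows)

def importance(feature):
    """Error increase when one feature's values are shuffled."""
    shuffled = [dict(r) for r in data]
    vals = [r[feature] for r in shuffled]
    random.shuffle(vals)
    for r, v in zip(shuffled, vals):
        r[feature] = v
    return mse(shuffled) - mse(data)

print("temp importance:", importance("temp"))    # large: model relies on it
print("noise importance:", importance("noise"))  # zero: model ignores it
```

Shuffling a feature the model relies on wrecks its predictions, so the error jump is that feature's importance; a feature the model ignores scores zero. This is also why, as Goldbloom notes, proxies like latitude can score highly: the permutation test measures reliance, not causation.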

~ Jai Krishna Ponnappan