AI-Powered Custom COVID-19 Healthcare Bot


The CDC COVID-19 Healthcare Bot


The CDC's COVID-19 bot is meant to quickly assess symptoms and risk factors and suggest a next course of action (such as seeing a doctor or simply staying home). Microsoft's Healthcare Bot runs on Azure and was first made publicly available in February 2019.

The bot service began as a research project in 2017. It allows organizations to create chatbots and AI-powered health assistants using Microsoft's service.

The Healthcare Bot service can integrate with Electronic Health Records. In addition to the CDC, customers using this service to build their own bots include Quest Diagnostics and Kaiser Permanente. Providence St. Joseph Health is also using a COVID-19 screening service built with Microsoft's Healthcare Bot technology.

Microsoft's AI-powered Healthcare Bot service is meant to guide customers through a natural conversational experience. It is customizable, so it can fit an organization's own scenarios and protocols.

In addition to the underlying bot service, several customizable COVID-19 response templates have also been made available. These include a COVID-19 risk assessment based on CDC guidelines; COVID-19 clinical triage based on CDC protocols; COVID-19 answers to frequently asked questions; and COVID-19 worldwide metrics.





An Overview of the Microsoft Healthcare Bot




Conversational AI for Healthcare: A cloud service that empowers healthcare organizations to build and deploy AI-powered virtual health assistants and chatbots that can be used to enhance their processes, self-service, and cost-reduction efforts.

Built-in healthcare intelligence: The Healthcare Bot comes with built-in healthcare AI services, including a symptom checker and medical content from known industry resources, and language understanding models that are tuned to understand medical and clinical terminology.

Customizable: You will receive your own white-labeled bot instance that can be embedded within your app or website. You can customize the built-in functionality and extend to introduce your own business flows through simple and intuitive visual editing tools.

Compliance: The service aligns with industry and globally recognized security and compliance standards such as ISO 27001, ISO 27018, CSA Gold, and GDPR, and provides tools that help partners create HIPAA-compliant solutions.

Out-of-the-box AI and world knowledge capabilities: While each health bot instance is highly customizable and extensible, the Health Bot Service is built with a wide range of out-of-the-box features. 
  • The Health Bot Service leverages information from respected healthcare industry data sources to generate accurate and relevant responses. 
  • The Health Bot Service enables meaningful conversations for patients with an interactive symptom checker and uses medical content databases to answer health questions. 
  • Conversational intelligence supports layperson natural language conversations to flow and adapt dynamically as each health bot instance learns from previous interactions. The service intelligence is powered by Microsoft Cognitive Services and credible world knowledge.



Configurable and extensible:

The Health Bot Service offers Microsoft partners extensive flexibility:
  • Unique scenarios can be authored by partners for their health bot instances to extend the baseline scenarios and support their own flows.
  • The health bot instance's behavior can be configured to match the partner's use cases, processes, and scenarios.
  • The health bot instance can easily be connected to partners' information systems, for example systems that manage EMRs, health information, and customer information.
  • The health bot instance can be easily integrated into other systems such as web sites, chat channels, and digital personal assistants.
Security and Privacy: The information handled by each instance of the Health Bot Service is protected to HIPAA standards and secured by Microsoft to stringent privacy and security standards.

Built on Microsoft Azure, the underlying architecture gives the Health Bot Service the ability to scale with resilience while maintaining those standards of privacy and security.

Easy to manage: Each health bot instance is easily managed and monitored by Microsoft partners via the Health Bot Service's management portal and management API. The management portal provides the ability to define the health bot instance's behavior in fine detail and to monitor usage with built-in reports. The management API allows the partner to embed the health bot instance and to securely exchange data and information.
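
As a rough illustration of what programmatic access might look like, here is a minimal Python sketch of calling a management-style API with a signed token. The endpoint path, tenant name, and token claims below are assumptions for illustration, not the documented contract; consult Microsoft's API reference for the real shape.

```python
# A minimal sketch of calling a Health Bot management API from Python.
# The endpoint path, tenant name, and claim layout below are illustrative
# assumptions, not the documented contract; check the official API reference.
import time
import jwt        # PyJWT
import requests

TENANT_NAME = "contoso-healthbot"          # hypothetical tenant
API_JWT_SECRET = "<your-api-jwt-secret>"   # issued via the management portal

def make_token() -> str:
    # Sign a short-lived JWT so the service can authenticate the caller.
    payload = {"tenantName": TENANT_NAME, "iat": int(time.time())}
    return jwt.encode(payload, API_JWT_SECRET, algorithm="HS256")

def fetch_conversation_report(base_url: str) -> dict:
    # Hypothetical reporting endpoint; replace with the documented path.
    resp = requests.get(
        f"{base_url}/api/tenants/{TENANT_NAME}/reports/conversations",
        headers={"Authorization": f"Bearer {make_token()}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```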



Common Use-case scenarios:

The Health Bot Service contains built-in scenarios. Additional scenarios may be authored through the Scenario Editor.




The built-in scenarios include the following:

  • Triage/symptom checker, powered by built-in medical protocols: The end user describes a symptom to the health bot instance and the bot helps the user to understand it and suggests how to react; for example, "I have a headache."
  • General information about conditions, symptoms, causes, complications, and more: Loaded with medical content, the health bot instance can provide information about medical conditions, symptoms, causes, and complications; for example, "information about diabetes," "what are the causes of malaria," "tell me about the complications of arthritis."
  • Find doctor type: The health bot instance can recommend the appropriate type of doctor to treat an illness; for example, "What type of doctor treats diabetes?"
Examples of scenarios that are typically built by customers as extensions using the scenario authoring elements include the following:

  • Health plan inquiries: Your health bot instance can be customized to access information about health plan details, such as pricing and benefits.
  • Finding providers: Your health bot instance can allow customers to search for doctors by specialty, in-network status, and other specifications.
  • Scheduling appointments: Your health bot instance can be designed to allow your customers to schedule appointments easily and securely.





Deploying your Custom COVID-19 Healthcare Bot 


Public healthcare providers on the frontline of the COVID-19 response have had to act quickly to support a sudden spike in inquiries from patients and constituents looking for answers to a common set of requests, such as:
  • Up-to-date outbreak information
  • Symptom guidance
  • Risk factors for people worried about infection
  • Suggested next courses of action


Many of these providers have expressed concern about their ability to support the volume of inquiries, and consequently have been using the Microsoft Healthcare Bot to help provide critical information to their patients.

In a nutshell, Microsoft's Healthcare Bot is a scalable, Azure-based SaaS solution that empowers Microsoft customers and partners to build and deploy compliant, AI-powered health agents, allowing them to offer their users intelligent, personalized access to health-related information and interactions through a natural conversational experience.

It is one solution that uses AI to help the CDC and other frontline organizations to provide help to those who need it.

The Healthcare Bot can easily be customized to suit an organization's scenarios and protocols.

To assist in the rapid deployment of COVID-19-specific bots, Microsoft has made available a set of COVID-19 templates that customers can use and modify:

  • COVID-19 Risk Assessment
  • COVID-19 Frequently Asked Questions
  • COVID-19 Worldwide metrics
  • COVID-19 Clinical Triage

To help you deploy your COVID-19 healthcare bot, Microsoft has created a reference architecture and deployment template.



Reference Architecture

The reference architecture provides guidance on a high-availability deployment of the Healthcare Bot and associated Azure services across two regions.



Note: The architecture can also be deployed in a single region. If you choose a single-region deployment, it is recommended that you model and estimate your peak traffic to confirm that one region is sufficient for your situation.
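
For instance, a back-of-envelope estimate along the following lines (all numbers below are illustrative assumptions, not measured capacities) can show whether a single region would hold up at peak:

```python
# A back-of-envelope check (all numbers are illustrative assumptions) for
# whether a single-region deployment can absorb your peak chat traffic.
PEAK_CONCURRENT_USERS = 20_000      # expected peak simultaneous chats
MESSAGES_PER_USER_PER_MIN = 4       # typical triage conversation pace
REQS_PER_INSTANCE_PER_SEC = 150     # measured App Service instance capacity
MAX_INSTANCES_PER_REGION = 20       # your App Service plan scale-out limit

peak_rps = PEAK_CONCURRENT_USERS * MESSAGES_PER_USER_PER_MIN / 60
instances_needed = -(-peak_rps // REQS_PER_INSTANCE_PER_SEC)  # ceiling division

print(f"Peak load: {peak_rps:.0f} req/s -> {instances_needed:.0f} instances")
if instances_needed > MAX_INSTANCES_PER_REGION:
    print("Single region is insufficient; use the two-region architecture.")
```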

Alternate Schematic Representation with Workflow:



Note: 

  • Unless otherwise noted explicitly, the first region listed in the locations parameter (array) will represent the primary region and the second will denote the secondary region.
  • The ARM template parameter name must be unique for each Health Bot deployment. Use an alphanumeric value for this name parameter; all Azure resources deployed by the ARM template will have names prefixed with this deployment name (see the parameters sketch after this list).
  • Azure Traffic Manager routes the Web Chat client and QnA Maker API traffic across the individual Azure App Service instances deployed in the two regions. The end user (customer) is responsible for configuring the traffic-routing algorithm in Traffic Manager so that traffic is split across the App Service instances according to their requirements.
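
To make the first two notes concrete, here is a minimal Python sketch that assembles and validates an ARM parameters file for a two-region deployment. The parameter names follow the notes above, but the exact schema of Microsoft's template may differ, so treat this as an illustration:

```python
# A minimal sketch (parameter names follow the notes above; the exact schema
# of Microsoft's ARM template may differ) that builds and validates a
# parameters file for a two-region Health Bot deployment.
import json

def build_parameters(name: str, primary: str, secondary: str) -> dict:
    # The template requires a unique, alphanumeric deployment name; every
    # Azure resource it creates is prefixed with this value.
    if not name.isalnum():
        raise ValueError(f"Deployment name must be alphanumeric: {name!r}")
    return {
        "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
        "contentVersion": "1.0.0.0",
        "parameters": {
            "name": {"value": name},
            # First entry = primary region, second = secondary region.
            "locations": {"value": [primary, secondary]},
        },
    }

with open("healthbot.parameters.json", "w") as f:
    json.dump(build_parameters("covidbot01", "eastus", "westus2"), f, indent=2)
```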




Deployment Template


To assist in deploying the reference architecture, Microsoft has developed an ARM template for you to use. Step-by-step instructions to deploy and configure the reference architecture can be found here: Deploy Microsoft Health Bot Reference Architecture.

To then set up your Health Bot, follow the instructions in the quick start: Setting Up Your COVID-19 Health Bot.

If you are ready to deploy and would like assistance:  

  1. Contact your account team for a quick demo and/or alignment of resources.
  2. Speak to one of our Health Bot Partners who can help you deploy and customize your own COVID-19 Health Bot.





Bounties to Ensure the Ethical Use of AI




Remember the human at the heart of the data



A team of AI researchers has released a number of proposals for ethical AI usage, including a suggestion that rewarding people for discovering biases in AI systems could be an effective way of making AI fairer.

Researchers from a variety of companies across the US and Europe collaborated to put together a set of ethical guidelines for AI development, along with suggestions for how to meet them.

One of the suggestions the researchers made was offering bounties to developers who find bias within AI programs. 

The suggestion was made in a paper entitled “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims”.



As examples of the biases the researchers hope to address, biased data and algorithms have been found in everything from healthcare applications to facial recognition systems used by law enforcement.

One such occurrence is the PATTERN risk assessment tool, recently used by the US Department of Justice to triage prisoners and decide which ones could be sent home when reducing prison populations in response to the coronavirus pandemic.


When artificial intelligence algorithms are biased, they can produce unethical results, which in turn can lead to unfair social outcomes and PR disasters for businesses and organizations alike. Reducing and limiting the different types of AI bias (algorithmic, technical, and emergent) is critical to the adoption rate and future success of real-world production AI systems.


Every developer (and every person, for that matter) has conscious and unconscious biases that inform the way they approach data collection and the world in general. These range from the mundane, such as a preference for the colour red over blue, to the more sinister, such as assumptions about gender roles, racial profiling, religious and xenophobic prejudices, and historical discrimination.

The prevalence of bias throughout society means that the training sets of data used by algorithms to learn reflect these assumptions, resulting in decisions which are skewed for or against certain sections of society. This is known as algorithmic bias.
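
A toy sketch makes this concrete. Using synthetic data and hypothetical feature names, the snippet below trains a classifier on historically skewed labels and shows that the model picks up the group attribute itself as a predictor:

```python
# A toy illustration (synthetic data, hypothetical feature names) of
# algorithmic bias: when historical labels were biased against one group,
# the model learns the group label itself as a predictor.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)              # 0 or 1, e.g. a demographic split
skill = rng.normal(0, 1, n)                # the attribute we *should* score on

# Historically biased labels: group 1 needed a higher skill bar to be hired.
hired = (skill > np.where(group == 1, 0.8, 0.0)).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print("weight on skill:", model.coef_[0][0])
print("weight on group:", model.coef_[0][1])   # non-zero => learned the bias
```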


The second type is technical bias, which arises when developing an algorithm: it occurs when the training data does not reflect all the scenarios the algorithm may encounter, a serious risk when the system is used for life-saving or critical functions.

In 2016, Tesla’s first known autopilot fatality occurred as a result of the AI being unable to identify the white side of a van against a brightly lit sky, resulting in the autopilot not applying the brakes. This kind of accident highlights the need to provide the algorithm with constant, up-to-date training and data reflecting myriad scenarios, along with the importance of testing in-the-wild, in all kinds of conditions.

Finally, we have emergent bias: this occurs when the algorithm encounters new knowledge, or when there is a mismatch between the user and the system's design. An excellent example of this is Amazon's Echo smart speaker, which has mistaken countless different words for its wake-up cue of "Alexa," resulting in the device responding and collecting information it was never asked for. Here, it's easy to see how incorporating a broader range of dialects, tones, and potential missteps into the training process might have helped to mitigate the issue.


Bringing Humans back to AI Testing  

While companies are increasingly researching methods to spot and mitigate biases, many fail to realize the importance of human-centric testing. At the heart of each data point feeding an algorithm lies a real person, and it is essential to have a sophisticated, rigorous form of software testing in place that harnesses the power of crowds, something that cannot be achieved in sandbox-based static testing.

All of the biases outlined above can be limited by working with a truly diverse data set which reflects the mix of languages, races, genders, locations, cultures, and hobbies that we see in our day-to-day life.

In-the-wild testers can also help to reduce the likelihood of accidents by spotting errors which AI might miss, or simply by asking questions which the algorithm does not have the programmed knowledge to comprehend. 

Considering the vast wealth of human knowledge and insight available at our fingertips via the web, failing to make use of this opportunity would be a serious omission. Spotting these kinds of obstacles early can also be incredibly beneficial from a business standpoint, allowing the development team to create an AI product that truly meets the needs of the end user and the purpose it was created for.



This may be the first time that an AI ethics paper has seriously advanced the idea as an option for combating AI bias. While it is unlikely that there are enough AI developers to find every bias and thereby ensure AI is ethical, bounties would still help companies reduce overall bias and get a sense of what kinds of bias are leaking into their AI systems.

The authors of the paper explained that the bug-bounty concept can be extended to AI with the use of bias and safety bounties and that proper use of this technique could lead to better-documented datasets and models.


The documentation would better reflect the limitations of both the model and the data. The researchers note that the same idea could be applied to other AI properties, such as:



  1. Interpretability
  2. Security
  3. Privacy


As more and more discussion occurs around the ethical principles of AI, many have noted that principles alone are not enough and that actions must be taken to keep AI ethical. The authors of the paper note that “existing regulations and norms in industry and academia are insufficient to ensure responsible AI development.”

Google Brain co-founder and AI industry leader Andrew Ng has likewise opined that guiding principles alone cannot ensure that AI is used responsibly and fairly, saying many of them need to be more explicit and paired with actionable ideas.




The combined research team's bias-bounty recommendation is an attempt to move beyond ethical principles into an area of ethical action.

The research team made a number of other recommendations that companies can follow to make their AI usage more ethical:



  • They suggest that a centralized database of AI incidents should be created and shared among the wider AI community. 
  • Similarly, the researchers propose that an audit trail should be established and that these trails should preserve information regarding the creation and deployment of safety-critical applications in AI platforms.
  • In order to preserve people's privacy, the research team suggested that privacy-centric techniques like encrypted communications, federated learning, and differential privacy should all be employed (see the sketch after this list).
  • Beyond this, the research team suggested that open source alternatives should be made widely available and that commercial AI models should be heavily scrutinized. 
  • Finally, the research team suggests that government funding be increased so that academic researchers can verify hardware performance claims.
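
As one concrete example of the privacy techniques mentioned above, here is a minimal sketch of differential privacy via the Laplace mechanism; the epsilon value and the counting query are illustrative choices:

```python
# A minimal sketch of one technique from the list above: differential
# privacy via the Laplace mechanism. Adding Laplace noise scaled to the
# query's sensitivity bounds what any single record can reveal.
import numpy as np

def private_count(values: list, epsilon: float = 0.5) -> float:
    """Return a count with epsilon-differential privacy.

    The sensitivity of a counting query is 1: adding or removing one
    person changes the true count by at most 1.
    """
    true_count = sum(bool(v) for v in values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# e.g. "how many users reported this symptom?" without exposing any one user
reports = [True] * 130 + [False] * 870
print(round(private_count(reports), 1))
```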



“With rapid technical progress in artificial intelligence (AI) and the spread of AI-based applications over the past several years, there is growing concern about how to ensure that the development and deployment of AI is beneficial — and not detrimental — to humanity,” the paper reads. 

“Artificial intelligence has the potential to transform society in ways both beneficial and harmful. Beneficial applications are more likely to be realized, and risks more likely to be avoided, if AI developers earn rather than assume the trust of society and of one another. This report has fleshed out one way of earning such trust, namely the making and assessment of verifiable claims about AI development through a variety of mechanisms.”



Earlier this year, the IEEE Standards Association released a whitepaper on ethical AI calling for a shift toward:



  • “Earth-friendly AI,” 
  • The protection of children online, and
  • The exploration of new metrics for the measurement of societal well-being.




As AI continues to become omnipresent in our lives, it’s crucial to ensure that those tasked with building our future are able to make it as fair and inclusive as possible. 

This is not easy, but with a considered approach that stops to remember the human at the heart of the data, we are one step closer to a safer, more sensible, and more just reality for AI.







Jai Krishna Ponnappan is an entrepreneur, technologist, investor and information technology executive with over 15 years of industry and consulting experience. He has worked with boutique consulting firms and built successful engineering and management practices with several independent contracting businesses. Some of his clients include SalesForce, BlackRock and the City of New York. He has a Master's in Computer Information Systems from California State University, a Bachelor's in Computer Science and Engineering, and an Executive MBA in Project Management. He started his career at the Indian Space Research Organization and has worked in critical leadership roles with organizations such as IBM among others.

References:

1. Brundage, M., et al. Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. arXiv preprint arXiv:2004.07213, 2020. Available at: https://arxiv.org/abs/2004.07213

2. IEEE Standards Association. Measuring What Matters in the Era of Global Warming and the Age of Algorithmic Promises. 2020. Available at: https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec-measuring-what-matters.pdf





Pentagon seeks to test & evaluate AI products



The Pentagon is looking to industry for input on how to better test and evaluate artificial intelligence products in the pipeline, to ensure their safety and effectiveness.


In a request for information this week, the Pentagon's Joint Artificial Intelligence Center (JAIC) seeks input on cutting-edge testing and evaluation capabilities to support the "full spectrum" of the Defense Department's emerging AI technologies, including:
  1. Machine learning
  2. Deep learning
  3. Neural networks


Stated Objectives:
  • The Pentagon wants to augment the JAIC’s Test and Evaluation office, which develops standards and conducts algorithm testing, system testing and operational testing on the military’s many AI initiatives.
  • The Pentagon stood up the JAIC in 2018 to centralize coordination and accelerate the adoption of AI and has been building out its ranks in recent months, hiring an official to implement its new AI ethical principles for warfare.
  • The JAIC is requesting testing tools and expertise in planning, data management, and analysis of inputs and outputs associated with those tools. 

  • The introduction of AI-enabled systems changes the process, metrics, data, and skills necessary to produce the level of testing the military needs, which is the reason for the request for information.
  • Testing and Evaluation provides knowledge of system capabilities and limitations to the acquisition community and to the war-fighter. 
  • The JAIC's T&E team will make rigorous and objective assessments of systems under operational conditions and against realistic threats, so that our war fighters ultimately trust the systems they are operating and that the risks associated with operating these systems are well-known to military acquisition decision-makers.

The solicitation indicates that the JAIC plans to use the feedback it receives to guide how it further builds out these capabilities.


The Pentagon is interested in tech testing tools that focus on:

  • Conversational interface applications using voice to text.
  • Speech-enabled products and services for DOD applications and systems.
  • Image analysis, including testing of deep-learning-based visual search and image classifiers.
  • Natural Language Processing-enabled products and services.
  • Humans augmented by machines, to include human-machine interfaces and improved methods to measure war-fighter cognitive and physical workloads, to include augmented reality and virtual reality test services.
  •  Autonomous systems.


The Pentagon also wants feedback regarding evaluation services in five mission areas (a minimal test-harness sketch follows the list):

  1. Dataset curation
  2. Test harness development
  3. Model output analysis
  4. Test reporting 
  5. Testing services
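
To illustrate how a few of these mission areas fit together, here is a minimal Python sketch of a test harness that runs a model over a curated dataset, analyzes the outputs, and produces a simple report. The model interface and the metric are stand-in assumptions, not anything specified by the JAIC:

```python
# A minimal sketch of a model test harness in the spirit of the mission
# areas above: run a model against a curated dataset, analyze outputs,
# and emit a test report. The model interface here is a stand-in assumption.
from dataclasses import dataclass

@dataclass
class TestReport:
    total: int
    correct: int

    @property
    def accuracy(self) -> float:
        return self.correct / self.total if self.total else 0.0

def run_harness(model, dataset) -> TestReport:
    # dataset: (input, expected_label) pairs from the curation step
    correct = sum(1 for x, y in dataset if model(x) == y)
    return TestReport(total=len(dataset), correct=correct)

# Usage with a trivial stand-in "model":
dataset = [(1, "odd"), (2, "even"), (3, "odd"), (4, "even")]
model = lambda x: "even" if x % 2 == 0 else "odd"
report = run_harness(model, dataset)
print(f"{report.correct}/{report.total} correct, accuracy={report.accuracy:.2f}")
```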


Finally, the Pentagon also seeks "other technologies" that it may not be aware of that "may be beneficial" to testing and evaluation efforts.

~ Jai Krishna Ponnappan


Supercomputing Mobilizing Against COVID-19

Tech has been taking some heavy losses from the coronavirus pandemic. Global supply chains have been disrupted, virtually every major tech conference taking place over the next few months has been canceled, and supercomputer facilities have even begun preemptively restricting visitor access. But tech is striking back, and hard: day by day, more and more organizations are dedicating supercomputing power toward the effort to diagnose, understand and fight back against COVID-19.

Testing for COVID-19

Before supercomputers began spinning up to find a cure, researchers were scrambling to simply diagnose the disease as cases in China’s Hubei province spun out of control.

With limited (and rapidly iterated) test kits available, Chinese researchers turned to AI and supercomputing for answers. They trained an AI model on China’s first petascale supercomputer, Tianhe-1, with the aim of distinguishing between the CT scans of pneumonic patients with COVID-19 and patients with non-COVID-19 pneumonia.

In a paper, the researchers reported nearly 80% accuracy when testing this method against external datasets, dramatically outperforming early test kits as well as human radiologists.
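
For readers curious about the general shape of such a model, here is a minimal sketch (not the published Tianhe-1 model; the input size and layer dimensions are assumptions) of a small CNN that classifies CT slices as COVID-19 pneumonia versus other pneumonia:

```python
# A minimal sketch (not the published Tianhe-1 model) of the general
# approach: a small CNN that classifies CT slices as COVID-19 pneumonia
# vs. other pneumonia. Input size and layer sizes are assumptions.
import torch
import torch.nn as nn

class CTClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, 2)  # assumes 224x224 input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                  # -> (batch, 32, 56, 56)
        return self.classifier(x.flatten(1))  # logits: [non-COVID, COVID]

model = CTClassifier()
scan = torch.randn(1, 1, 224, 224)            # one grayscale CT slice
print(model(scan).softmax(dim=1))             # class probabilities
```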





The Summit supercomputer: the big gun was brought out early.




One of the first systems to join the fight was the world’s most powerful publicly-ranked supercomputer: Summit. Oak Ridge National Laboratory (ORNL) pitted Summit’s 148 Linpack petaflops of performance against a crucial “spike” protein on the coronavirus that researchers believe may be key to disabling its ability to infect. Testing how various compounds interact with key virus components can be an extremely time-consuming task, so the researchers – a team from ORNL’s Center for Molecular Biophysics –  were granted a discretionary time allocation on Summit, which allowed them to cycle through 8,000 compounds within a few days.

Using Summit, the research team identified 77 compounds that may be promising candidates for testing by medical researchers. "Summit was needed to rapidly get the simulation results we needed. It took us a day or two whereas it would have taken months on a normal computer," said Jeremy Smith, director of UT/ORNL CMB and principal researcher for the study. The researchers are preparing to repeat the study using a new, higher-quality model of the spike protein recently made available.
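
Conceptually, the screening workflow looks like the sketch below: score each compound's interaction with the target in parallel, then rank the library and keep the best candidates. The `dock_score` function is a stand-in for the docking and molecular-dynamics codes actually run on Summit:

```python
# A minimal sketch of the screening workflow described above: score how
# strongly each compound binds the spike protein, then rank the library.
# `dock_score` is a stand-in; real runs use docking/MD codes on Summit.
from concurrent.futures import ProcessPoolExecutor
import random

def dock_score(compound: str) -> float:
    # Stand-in for an expensive docking/molecular-dynamics simulation.
    return random.Random(compound).uniform(-12.0, 0.0)  # binding-energy-like score

def screen(library, top_n: int = 77):
    with ProcessPoolExecutor() as pool:       # simulations are independent
        scores = list(pool.map(dock_score, library))
    ranked = sorted(zip(library, scores), key=lambda p: p[1])  # most negative first
    return ranked[:top_n]

if __name__ == "__main__":
    library = [f"compound-{i:04d}" for i in range(8000)]
    for name, score in screen(library)[:5]:
        print(f"{name}: {score:.2f}")
```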

Major organizations have opened their doors – and wallets – for coronavirus computing proposals

Last week, the National Science Foundation (NSF) issued a Dear Colleague Letter expressing interest in proposals for "non-medical, non-clinical-care research that can be used immediately to better understand how to model and understand the spread of COVID-19; to inform and educate about the science of virus transmission and prevention; and to encourage the development of processes and actions to address this global challenge." Two days later, it issued another Dear Colleague Letter specifically inviting rapid response research proposals for COVID-19 computing activities through its Office of Advanced Cyberinfrastructure. As a complement to existing funding opportunities, the NSF also invited requests for supplemental funding.

Even with its quick response, though, the NSF wasn't the first to open its pocketbook. In January, the European Commission announced a €10 million call for expressions of interest for projects that fight COVID-19 through vaccine development, treatment and diagnostics. Then, on the same day as the latest NSF Dear Colleague Letter, it announced an additional €37.5 million in funding.

€3 million of this funding has already been allocated to the Exscalate4CoV (E4C) program in Italy – one of the hardest-hit countries. E4C is operating through Exscalate, a supercomputing platform that uses a chemical library of over 500 billion molecules to conduct pathogen research.

Specifically, E4C is aiming to identify candidate molecules for drugs, help design a biochemical and cellular screening test, identify key genomic regions in COVID-19 and more.

Beyond E4C, the EU also highlighted “on-demand, large-scale virtual screening” of potential drugs and antibodies at the HPC Centre of Excellence for Computational Biomolecular Research, as well as “prioritized and immediate access” to supercomputers operated by the EuroHPC Joint Undertaking.

Presumably, as the NSF and European Commission funding opportunities are leveraged, high-performance computing will play an increasingly large role in the fight against the coronavirus.



Post by Jai Krishna Ponnappan

A hub of Artificial Intelligence resources by MIT


A team led by Media Lab Associate Professor Cynthia Breazeal has launched aieducation.mit.edu to share a variety of online activities for K-12 students to learn about artificial intelligence, with a focus on how to design and use it responsibly. Learning resources provided on this website can help to address the needs of the millions of children, parents, and educators worldwide who are staying at home due to school closures caused by Covid-19, and are looking for free educational activities that support project-based STEM learning in an exciting and innovative area.

A mural of hopes and questions about artificial intelligence from a school workshop.


The website is a collaboration between the Media Lab, MIT Stephen A. Schwarzman College of Computing, and MIT Open Learning, serving as a hub to highlight diverse work by faculty, staff, and students across the MIT community at the intersection of AI, learning, and education.

"MIT is the birthplace of Constructionism under Seymour Papert. MIT has revolutionized how children learn computational thinking with hugely successful platforms such as Scratch and App Inventor. Now, we are bringing this rich tradition and deep expertise to how children learn about AI through project-based learning that dovetails technical concepts with ethical design and responsible use," says Breazeal.

The website will serve as a hub for MIT's latest work in innovating learning and education in the era of AI. In addition to highlighting research, it also features up-to-date project-based activities, learning units, child-friendly software tools, digital interactives, and other supporting materials, highlighting a variety of MIT-developed educational research and collaborative outreach efforts across and beyond MIT. The site is intended for use by students, parents, teachers, and lifelong learners alike, with resources for children and adults at all learning levels, and with varying levels of comfort with technology, for a range of artificial intelligence topics. The team has also gathered a variety of external resources to explore, such as Teachable Machines by Google, a browser-based platform that lets users train classifiers for their own image-recognition algorithms in a user-friendly way.
In the spirit of "mens et manus"—the MIT motto, meaning "mind and hand"—the vision of technology for learning at MIT is about empowering and inspiring learners of all ages in the pursuit of creative endeavors. The activities highlighted on the new website are designed in the tradition of constructionism: learning through project-based experiences in which learners build and share their work. The approach is also inspired by the idea of computational action, where children can design AI-enabled technologies to help others in their community.

"MIT has been a world leader in AI since the 1960s," says MIT professor of computer science and engineering Hal Abelson, who has long been involved in MIT's AI research and educational technology. "MIT's approach to making machines intelligent has always been strongly linked with our work in K-12 education. That work is aimed at empowering young people through computational ideas that help them understand the world and computational actions that empower them to improve life for themselves and their communities."

Research in computer science education and AI education highlights the importance of having a mix of plugged and unplugged learning approaches. Unplugged activities include kinesthetic or discussion-based activities developed to introduce children to concepts in AI and its societal impact without using a computer. Unplugged approaches to learning AI are found to be especially helpful for young children. Moreover, these approaches can also be accessible to learning environments (classrooms and homes) that have limited access to technology.
As computers continue to automate more and more routine tasks, inequity of education remains a key barrier to future opportunities, where success depends increasingly on intellect, creativity, social skills, and having specific skills and knowledge. This accelerating change raises the critical question of how to best prepare students, from children to lifelong learners, to be successful and to flourish in the era of AI.

It is important to help prepare a diverse and inclusive citizenry to be responsible designers and conscientious users of AI. In that spirit, the activities on aieducation.mit.edu range from hands-on programming to paper prototyping, to Socratic seminars, and even creative writing about speculative fiction. The learning units and project-based activities are designed to be accessible to a wide audience with different backgrounds and comfort levels with technology. A number of these activities leverage learning about AI as a way to connect to the arts, humanities, and social sciences, too, offering a holistic view of how AI intersects with different interests and endeavors.
The rising ubiquity of AI affects us all, but today a disproportionately small slice of the population has the skills or power to decide how AI is designed or implemented; worrying consequences have been seen in algorithmic bias and perpetuation of unjust systems. Democratizing AI through education, starting in K-12, will help to make it more accessible and diverse at all levels, ultimately helping to create a more inclusive, fair, and equitable future.



AI app can listen to your cough & detect COVID-19



EPFL researchers have developed an artificial intelligence-based system that can listen to your cough and indicate whether you have COVID-19.
With the new Coughvid app, developed by five researchers at EPFL's Embedded Systems Laboratory (ESL), you can record your cough on a smartphone and find out whether you might have COVID-19. So how can a smartphone app detect the new coronavirus? "According to the World Health Organization, 67.7% of people who have the virus present with a dry cough—producing no mucus—as opposed to the wet cough typical of a cold or allergy," says David Atienza, a professor at EPFL's School of Engineering who is also the head of ESL and a member of the Coughvid development team. The app is still being developed and will be released in the next few weeks.

Free and anonymous

Once the app is available, users will simply need to install it and record their cough—the results will appear immediately. "We wanted to develop a reliable, easy-to-use system that could be deployed for large-scale testing," says Atienza. "It's an alternative to conventional tests." In addition to being easy to use, the app has the advantage of being non-invasive, free and anonymous. "The app has a 70% accuracy rate," he adds. "That said, people who think they may have the disease should still go see their doctor. Coughvid is not a substitute for a medical exam."

Using artificial intelligence to help patients

Coughvid uses artificial intelligence to distinguish between different types of coughs based on their sound. "The idea is not new. Doctors already listen to their patients' coughs to diagnose whooping cough, asthma and pneumonia," says Atienza.
Right now his team is collecting as much data as possible to train the app to distinguish between the coughs of people with COVID-19, healthy people, and people with other kinds of respiratory ailments. "We'll release the app once we've accumulated enough data. It could take a few more weeks," says Atienza. In the meantime, COVID-19 patients who would like to contribute to the development work can record their cough at https://coughvid.epfl.ch/ or on the Coughvid mobile app.
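
While Coughvid's exact pipeline isn't described here, the general approach of sound-based classification can be sketched as follows: extract MFCC features from a recording and train a standard classifier on labeled dry/wet coughs. The feature choice, the model, and the data below are placeholder assumptions:

```python
# A minimal sketch (not the Coughvid pipeline, whose internals are not
# public here) of the general approach: extract MFCC features from a
# cough recording and classify dry vs. wet cough with a standard model.
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def cough_features(wav_path: str) -> np.ndarray:
    # Summarize the recording as mean MFCCs, a common audio fingerprint.
    audio, sr = librosa.load(wav_path, sr=16_000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# X: feature rows from labeled recordings; y: 1 = dry cough, 0 = wet cough.
# Real training data would come from a labeled corpus like the one being collected.
X = np.random.randn(200, 20)          # placeholder features for illustration
y = np.random.randint(0, 2, 200)      # placeholder labels
clf = RandomForestClassifier(n_estimators=100).fit(X, y)

# At inference time: probability that a new recording is a dry cough.
# print(clf.predict_proba([cough_features("my_cough.wav")])[0][1])
```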



AI-Search & Rescue Defense project at Sea


Australian defense project tests potential of AI at sea


The Australian Department of Defense is conducting trials to test the potential of an artificial intelligence (AI) system in the AI-Search project at sea.

Tests conducted in the search-and-rescue (SAR) trials as part of the project will assess the potential of AI to augment and enhance SAR and to save lives at sea.

The project is conducted in collaboration with Warfare Innovation Navy Branch, Plan Jericho, the Royal Australian Air Force (RAAF) Air Mobility Group’s No 35 Squadron, and the University of Tasmania’s Australian Maritime College.



Modern AI is used for the detection of small and difficult-to-spot targets, including life rafts and individual survivors.

Plan Jericho AI lead wing commander Michael Gan said: “The idea was to train a machine-learning algorithm and AI sensors to complement existing visual search techniques.

“Our vision was to give any aircraft and other defense platforms, including unmanned aerial systems, a low-cost, improvised SAR capability.”

A series of new machine-learning algorithms was developed for the project, alongside deterministic processes to analyse the imagery collected by camera sensors and aid human observers.
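
As a rough illustration of the detection task (the project's actual algorithms are not public here), the sketch below runs a sliding window over a sea-surface frame and flags tiles where a scorer, standing in for the trained model, reports a likely target:

```python
# A minimal sketch (the project's actual algorithms are not public here)
# of a sliding-window detector for small sea-surface targets: score image
# tiles with a trained classifier and flag high-confidence detections.
import numpy as np

def sliding_windows(image: np.ndarray, size: int = 64, stride: int = 32):
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield (x, y), image[y:y + size, x:x + size]

def detect(image: np.ndarray, score_tile, threshold: float = 0.9):
    # score_tile: trained model mapping a tile -> P(target), e.g. a CNN.
    return [
        (x, y, p)
        for (x, y), tile in sliding_windows(image)
        if (p := score_tile(tile)) >= threshold
    ]

# Stand-in scorer: fraction of bright pixels against dark water (a crude
# heuristic, standing in for the machine-learning model described above).
score = lambda tile: float((tile > 200).mean() > 0.1)
frame = np.zeros((512, 512), dtype=np.uint8)
frame[200:232, 300:332] = 255                 # a synthetic "life raft"
print(detect(frame, score))                   # tiles overlapping the raft
```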



Last year saw the first successful trial conducted aboard a RAAF C-27J Spartan. The second trial was performed last month near Stradbroke Island, Queensland.

During these trials, a range of small targets were detected in a wide sea area while training the algorithm as part of the project.

The trials highlighted the feasibility of the technology and its easy integration into other Australian Defense Forces (ADF) airborne platforms.

Warfare Innovation Navy Branch lieutenant Harry Hubbert said: “There is a lot of discussion about AI in Defense but the sheer processing power of machine-learning applied to SAR has the potential to save lives and transform it.”