ARTIFICIAL INTELLIGENCE:
NOT NECESSARILY AUTONOMOUS INTELLIGENCE

We are at the dawn of artificial intelligence. Sundar Pichai, the Chief Executive Officer of Google, says that artificial intelligence will have a more profound effect on human development over the next ten years than the discovery of fire, electricity, or the internet. Our warp-speed production of Covid-19 vaccines is attributable in part to artificial intelligence.

Artificial intelligence (AI), along with related tools such as machine learning (ML), hyperautomation, robotic process automation (RPA), the internet of things (IoT), and natural language understanding (NLU), carries enormous potential to change our lives. These tools diagnose diabetic retinopathy and intracranial hemorrhages, detect lung, skin, and breast cancer, map maxillofacial contours, and assess cardiac function, valve disease, and vascular anatomy. And so much more.

AI plays a key role in medicine. It predicts illnesses and patient outcomes. It can organize health care records. It differentiates malignant from benign tumors. It assists pathologists. It can manage drug trials.

AI can also drive cars, recognize your voice, play games like chess, implement facial recognition, vacuum your home, or carry on a pleasant dialogue as Siri or Alexa. The possibilities are endless.

There are regulatory considerations in the development of AI. Many of the concerns below have not been resolved by the courts, but they are worth consideration.

  1. Privacy. Facial recognition AI and machine learning tools can detect, recognize, verify, and understand the characteristics of human faces. The algorithms are taught recognition by being fed millions of photographs, which catalogue dozens of points for each face. Flickr reportedly has six billion images. IBM reportedly downloaded millions of the photos from Flickr and fed them to an AI machine. The downloaded photos reportedly had metadata identifying the Flickr user who uploaded each photograph, the user’s website, and where each photo was taken. Amazon acquired the data from IBM. Apparently, nobody bothered to get permission from, or give notice to, the Flickr users. Should they have? See Vance v. Amazon.com, Inc., 525 F. Supp. 3d 1301 (W.D. Wash. 2021).
  2. Brittleness. No matter how advanced an artificial intelligence tool, it is not foolproof. It can only “think” with the data points fed into the machine. If a data point is erroneously added or deleted, the tool becomes “brittle.” One article describes “brittleness” using the example of an early warning radar system installed in Greenland in 1960. (Gillies & Smith, “Can AI systems meet the ethical requirements of professional decision-making in health care?”, Springer Nature, August 19, 2021.) The radar warned of a massive impending Soviet missile attack. It turned out that the radar signals had bounced off the moon and triggered the warning. Fortunately, the operators used intuitive reasoning. Khrushchev was giving a speech in New York at the time, and they intuitively “thought” that the Soviets would hardly make Khrushchev a sitting duck. The system was “brittle” because the developers did not anticipate that a moonrise could alter the radar signals.
    ***
    Perhaps it is best that AI is not intuitive. In the movie “2001: A Space Odyssey,” the spacecraft was guided by a HAL 9000 supercomputer. HAL had intuitive reasoning, which did not work out so well for the crew.
  3. Garbage in, garbage out (GIGO). The state of Michigan retained a software developer to design an automated system to detect fraud in unemployment claims. The developer designed a decision tree: if an employer advised the Department that the employee quit rather than was fired, the computer would send the applicant who was receiving unemployment compensation benefits a letter demanding a response to the employer’s claim within 30 days. In many cases, the applicant never got the letter because of a move to another address. Absent a response, the Department imposed a recapture of the funds and a substantial penalty. The recipient wasn’t aware of the recapture and penalties until their bank account was garnished. See Cahoo et al. v. Fast Enterprises, LLC, 508 F. Supp. 3d 162 (E.D. Mich. 2020). (A minimal code sketch of this kind of automated decision logic appears after this list.)
    ***
    A 2019 federal study concluded that facial-recognition systems misidentified people of color more often than white people, casting doubt on a rapidly expanding investigative technique widely used by law enforcement across the United States. Asians and African Americans were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search, according to the study. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.
    ***
    The New York Department of Financial Services opened an investigation of the algorithm behind Apple’s credit card after allegations that it unfairly discriminated against women.
  4. Adversarial Attacks. A computer vision program that recognizes objects in an image should be able to recognize them from all kinds of angles, at all scales, and in all lighting conditions. But it turns out that tweaking a few pixels, in ways not noticeable to the human eye, can cause the AI system to produce the wrong output. These are called “adversarial attacks.” Medical AI is utilized by insurance companies in reviewing claims submissions. An adversarial attack may be as simple as submitting a photograph of a malignant mole from one patient in order to get coverage for removing a mole from another patient. Another example may be an insurer’s use of predictive models to deny an opiate prescription on the basis of a risk score: a physician who is certain that the patient needs the prescription might change the documented history, physical examination, and treatment to sidestep the algorithm. (A minimal code sketch of a pixel-level adversarial tweak appears after this list.)
  5. Overreach. Siri, the clever assistant at Apple, gets triggered when the consumer uses a “hot word,” like “Hey Siri.” Plaintiffs in a lawsuit alleged that Siri is routinely triggered without hot words. Indeed, the sound of a zipper can reportedly activate the device. In maintaining Siri, Apple routinely transmits a “small portion” of Siri recordings to outside contractors for evaluation as to whether Siri’s activation was accidental or deliberate. As a result, the contractors were allegedly exposed to “private discussions between doctors and patients, confidential business deals, and sexual encounters.” Plaintiffs sued under the Federal Wiretap Act. See Lopez v. Apple, Inc., 519 F. Supp. 3d 672 (N.D. Cal. 2021).
  6. User’s Expertise. It is asserted that, in many cases, radiographic AI is more accurate than most human radiologists. Assume that the AI machine is accurate 94% of the time. Does this mean that a hospitalist with no radiology experience can run a radiograph through the AI machine and not be accountable for the 6% of cases in which there is a misdiagnosis? Does it mean that only a radiologist can use the machine and is responsible for catching the 6% error rate? Does it mean that, if the AI machine is more accurate than a radiologist, it is malpractice not to use the machine? Is the practitioner liable if she in good faith used an AI machine that was “brittle” and had a higher error rate? Is the AI developer liable for an excessive error rate? These are among the many questions not yet answered in the new frontier of AI.
  7. Trial Evidence. AI results are admissible if the machine is generally accepted by the relevant scientific community and is not novel. The introduction of novel scientific evidence, however, requires a showing that the expert testimony is based on a scientific principle or procedure which has been sufficiently established to have gained general acceptance in the particular field to which it belongs. Does this mean that the introduction of results from an AI tool requires the opposing party to get access to the source code? These are the types of legal skirmishes that occur in court regarding the science of AI.
  8. Professionalism. The practice of medicine has a strict culture driven by licensing boards, institutional accreditation, board certification, peer self-governance, codes of conduct, insurance requirements, and malpractice laws. To date, there are no similar standards for AI developers, other than that an AI machine used as a medical device needs FDA approval. Must the AI developer secure FDA approval when the tool technically is not used as a medical device? What if the AI developer is Elizabeth Holmes? The practice of medicine is aligned with the public interest in the advancement of medicine. There is no similar culture in AI development.
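
To make the “garbage in, garbage out” problem in item 3 concrete, the following is a minimal sketch, in Python, of the kind of automated decision logic described there. It is not the State of Michigan’s actual code; the Claim fields, the adjudicate function, and the penalty multiplier are hypothetical. The point it illustrates is that the system treats a non-response exactly like an admission of fraud, even when the letter never reached the claimant.

```python
# Hypothetical sketch of an automated fraud-adjudication decision tree.
# Not the actual Michigan system; names and numbers are illustrative only.
from dataclasses import dataclass

@dataclass
class Claim:
    employer_says_quit: bool    # employer disputes that the worker was fired
    response_received: bool     # did the claimant answer the 30-day letter?
    benefits_paid: float        # unemployment compensation already paid out

def adjudicate(claim: Claim) -> dict:
    """Toy decision tree with no human review anywhere on the fraud path."""
    if not claim.employer_says_quit:
        return {"action": "no change"}
    if claim.response_received:
        return {"action": "route to a human examiner"}
    # Garbage in, garbage out: the system cannot distinguish "claimant ignored
    # the letter" from "letter went to an old address and was never seen."
    return {
        "action": "fraud determination",
        "recapture": claim.benefits_paid,
        "penalty": claim.benefits_paid * 4,  # hypothetical penalty multiplier
    }

# A claimant who moved and never saw the letter is treated exactly like one
# who admitted fraud.
print(adjudicate(Claim(employer_says_quit=True, response_received=False,
                       benefits_paid=5000.0)))
```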
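
Likewise, to illustrate the “adversarial attack” described in item 4, here is a minimal sketch that assumes a toy linear scorer standing in for a real medical-imaging model and random numbers standing in for a real image; none of it comes from an actual insurer’s system. Nudging every pixel by an amount far too small for the eye to notice flips the classifier’s output.

```python
# Toy demonstration of a pixel-level adversarial perturbation (a stripped-down
# version of the "fast gradient sign" idea). Entirely hypothetical data.
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a 32x32 grayscale image flattened into 1,024 pixel values.
image = rng.random(1024)

# Toy classifier: a fixed linear scorer. A score above 0 means "malignant."
weights = rng.normal(size=1024)
bias = -weights @ image + 0.05   # rigged so the clean image scores just above 0

def classify(x):
    return "malignant" if weights @ x + bias > 0 else "benign"

print("clean image:     ", classify(image))       # -> malignant

# Adversarial tweak: nudge each pixel a tiny amount against the gradient of the
# score (for a linear scorer, the gradient with respect to the input is just
# `weights`). The change per pixel is far below what a human eye would notice.
epsilon = 0.001
adversarial = image - epsilon * np.sign(weights)

print("max pixel change:", np.max(np.abs(adversarial - image)))  # 0.001
print("tweaked image:   ", classify(adversarial))  # -> benign
```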

There are many issues surrounding artificial intelligence that still need to be addressed.

One basic problem is that AI can magnify bias. A recent study in Science found that risk-prediction tools used in health care had significant racial bias. Another study, published in the Journal of Internal Medicine, found that software used by hospitals to prioritize kidney transplants was biased against African American patients.

Another basic problem is the impact of AI on outcomes. Will the operator still be responsible for the accuracy of the outcome, or can they deflect responsibility to an AI machine and its developer? What happens when a self-driving car runs over a dog? What happens when an applicant is wrongfully denied a loan because of faulty AI? What happens if the AI inappropriately screens out job applicants?

Another consideration is judgment. Human beings are empathetic creatures, and empathy becomes an integral part of the decision-making process. We need to be aware that machines can process highly contextual situations yet neglect critical variables that have not been identified.

Some companies have appointed a Chief Ethics Officer to analyze and create AI standards relating to safety, fairness, bias, diversity, and privacy. But this is not an area that can be wholly delegated to the private sector. The government will inevitably and necessarily get more involved in AI compliance, to make sure that AI (if truly the biggest step forward since electricity) does not turn out like the HAL 9000 in “2001: A Space Odyssey.”

***

Swanson Hatch, P.A. is a law firm founded by two former Minnesota Attorneys General: Lori Swanson and Mike Hatch, who consecutively served as Attorneys General of the State of Minnesota for 20 years, from 1999 to 2019. Lori Swanson served as Attorney General from 2007 to 2019. Prior to that, she served as Solicitor General of the State of Minnesota and Deputy Attorney General. She also previously served as Chair of the Federal Reserve Board’s Consumer Advisory Council in Washington, D.C. She can be reached at lswanson@swansonhatch.com, or at 612-315-3037. Mike Hatch served as Attorney General from 1999 to 2007. Prior to that, he served as Commissioner of the Minnesota Department of Commerce for seven years, where he was the primary regulator of the insurance, real estate, mortgage, banking and financial services, and securities industries in Minnesota. He can be reached at mhatch@swansonhatch.com, or at 612-315-3037. Swanson and Hatch frequently advise clients on complex and cutting-edge regulatory and compliance matters.

***


www.swansonhatch.com
431 S Seventh Street, Suite 2545
Minneapolis, MN 55415
612-315-3037

The materials in this article are for informational purposes and do not constitute legal advice, nor does your unsolicited transmission of information to us create a lawyer-client relationship. Sending us an email will not make you a client of our firm. Until we have agreed to represent you, nothing you send us will be confidential or privileged. Readers should not act on information contained in this article without seeking professional counsel. The best way for you to inquire about possible representation is to contact an attorney of the firm. Actual results depend on the specific factual and legal circumstances of each client’s case. Past results do not guarantee future results in any matter.

