Classically, the law reasons by analogy, and from precedent. The theory is that the law should deal with like situations in like ways. In some respects, however, Artificial Intelligence, especially the concept of machine learning, is virtually unprecedented, so the law is struggling with how to deal with it, or will be soon. Consider a few of the difficulties that the law will probably need to address:
Who will pay for healthcare services dependent on AI, and who will be entitled to such payments? Will those payments be keyed to "value," the currently orthodox yardstick? If so, by what means will "value" be measured, especially if, as many predict, outcomes may change unforeseeably?
Who will own the massive trove of data AI learns from and bases decisions on, and how will the rights of the owner be protected?
What governmental agencies will have a voice in regulating the use of AI in health care, and how will they rule? How will federalism issues be addressed?
Who will own the AI system’s intellectual property, and how will that owner’s rights be protected? Can a machine that has learned, as it was programmed to do, and then acted upon its learning, be seen as a creator, or as an inventor? If so, can it hold intellectual property rights in its own creations, and if so, how will those be protected, and for whose benefit? If not, who does hold such rights?
What are the implications of AI for competition law, and will antitrust authorities become involved?
What happens if a patient is injured, or even killed, while receiving AI-influenced or AI-controlled diagnosis or treatment? Will the owner of the AI system face liability in such circumstances? If so, under what theories? Fundamental to product liability claims is the proposition that the allegedly defective product reached the consumer in substantially the same condition it was in when it left the hands of the manufacturer. How can we evaluate product liability claims when, as a result of machine learning, the product will not be in the same condition it was in at manufacture, and will in fact be in a condition that no one, including the programmers who created the AI, can foresee?
Will health care professionals, or institutions, face liability for unexpected outcomes alleged to have resulted from deployment of AI? If so, under what theories? Can an AI system, which theoretically is based on and improves upon the best care known, ever breach the standard of care? Will early adopter doctors be accused of breach because AI is not yet used by their "reasonably prudent" colleagues? Will late adopters be liable because they waited too long to jump on the bandwagon?
What defenses, if any, will be available to defendants?
Could AI aggravate health disparities, or itself be a source of bias, and if so, what if anything should or can be done about it?
Can AI be deployed in those jurisdictions that prohibit the corporate practice of medicine? If so, what are the implications for patients in those jurisdictions?
This list is intended to be illustrative, not comprehensive. And it is US-centric. The complexities grow exponentially when one thinks about issues arising when AI is exported across national borders, as it almost certainly will be.
Historically, the genius of the common law has been its ability to adapt to circumstances unseen when it arose. We can be confident it will do so again. It is much harder to be confident in predicting how.
Why should you attend: The states have regulated the practice of medicine since the earliest days of the Republic. Since at least the enactment of Medicare in 1965, however, the role of the federal government has grown enormously, and as fast as an aggressive tumor. The welter of state and federal statutes and regulations governing health care in the U.S. today is probably more complex than in any other country at any time in history.
Depending on the circumstances, violations of these authorities, sometimes even unwitting ones, can result in sanctions that are severe, crippling, or even economically fatal. Apart from the risks associated with violations of numerous statutory and regulatory enactments, healthcare in the United States faces a constant and evolving risk of litigation, including tort, breach of contract, and a variety of other theories. And all this was the state of affairs before artificial intelligence came on the scene. With its arrival, we face a bevy of new issues that may well take years to sort out entirely.
In the meantime, uncertainty is unavoidable. No one has the crystal ball needed to foresee all the questions, never mind all the answers. By looking at some of the problems the law will need to address before they arise, or at least before they arise at your organization or for you personally, you can be better prepared to know what to look for, to understand what developments mean, and to take action to reduce your risk and perhaps to improve your future.
Areas Covered in the Session: