Category Archives: AI

ChatGPT on Republican Meanness

So I took ChatGPT for a spin. Overall the results are scarily good. But.

Or maybe that is the best answer?


On Robot & AI Personhood

The question of robot and AI personhood comes up a lot, and it will likely come up even more in the future with the proliferation of models like GPT-3, which can be used to mimic human conversation very convincingly. I just finished a first draft of a short essay surveying contemporary issues in robot law and policy; that gave me a chance to briefly sketch out my views on the personhood issue, and I figured I might share it here:

As the law currently stands in the United States and, as far as I know, everywhere else, 1 robots of every type are treated as chattel. That is, in the words of Neil Richards and William Smart, “Robots are, and for many years will remain, tools. They are sophisticated tools that use complex software, to be sure, but no different in essence than a hammer, a power drill, a word processor, a web browser, or the braking system in your car.” 2 It follows that robot personhood (or AI personhood) under law remains a remote prospect, and that some lesser form of increased legal protection for robots, beyond that normally accorded to chattels in order to protect their owners’ rights, also remains quite unlikely. Indeed, barring some game-changing breakthrough in neural networks or some other unforeseen technology, there seems little prospect that in the coming decades machines of any sort will achieve the sort of self-awareness and sentience that we commonly associate with a legitimate claim to the bundle of rights and respect we organize under the rubric of personhood. 3

There are, however, two different scenarios in which society or policymakers might choose to bestow some sort of rights or protections on robots beyond those normally given to chattels. The first is that we discover some social utility in the legal fiction that a robot is a person. No one, after all, seriously believes that a corporation is an actual person, or indeed that a corporation is alive or sentient, 4 yet we accept the legal fiction of corporate personhood because it serves interests, such as the ability to transact in its own name and the limitation of actual humans’ liability, that society—or parts of it—finds useful. Although nothing at present suggests similar social gains from the legal recognition of robotic personhood (indeed, issues of liability and responsibility for robot harms need more clarity, not less accountability), policymakers might conceivably come to see things differently. In the meantime, it is likely that any need for, say, giving robots the power to transact can be achieved through ordinary uses of the corporate form, in which a firm might, for example, be the legal owner of a robot. 5

Early cases further suggest that U.S. courts are not willing to assign a copyright or a patent to a robot or an AI even when it generated the work or design at issue. Here, however, the primary justification has been straightforward statutory construction: holdings that the relevant U.S. laws only allow intellectual property rights to be granted to persons, and that the legislature did not intend to include machines within that definition. 6 Rules around the world may differ. For example, an Australian federal court initially ordered that an AI be recognized as an inventor by IP Australia, although that decision was later overturned on appeal. 7 Similarly, a Chinese court found that an AI-produced text was deserving of copyright protection under Chinese law. 8

A more plausible scenario for some sort of robot rights begins with the observation that human beings tend to anthropomorphize robots. As Kate Darling observes, “Our well-documented inclination to anthropomorphically relate to animals translates remarkably well to robots,” and ever so much more so to lifelike social robots designed to elicit that reaction—even when people know that they are really dealing with a machine. 9 Similarly, studies suggest that many people are wired not only to feel more empathy towards lifelike robots than towards other objects, but also, as a result, to experience harm to robots as wrong. 10 Thus, we might choose to ban the “abuse” of robots (beating, torturing) either because it offends people, or because we fear that some persons who abuse robots may develop habits of thought or behavior that will carry over into their relationships with live people or animals, abuse of which is commonly prohibited. Were we to find empirical support for the hypothesis that abuse of lifelike, or perhaps humanlike, robots makes abusive behavior towards people more likely, that would provide strong grounds for banning some types of harms to robots—a correlative 11 to giving robots certain rights against humans. 12

It’s an early draft, so comments welcome!


Notes

  1. The sole possible exception is Saudi Arabia, which gave ‘citizenship’ to a humanoid robot, Sophia, in 2017. It is hard to see this as anything more than a publicity stunt, both because female citizenship in Saudi Arabia comes with restrictions that do not seem to apply to Sophia, and because “her” “life” consists of … marketing for her Hong Kong-based creators. See Emily Reynolds, The agony of Sophia, the world’s first robot citizen condemned to a lifeless career in marketing, Wired (Jan. 6, 2018), https://www.wired.co.uk/article/sophia-robot-citizen-womens-rights-detriot-become-human-hanson-robotics.
  2. Neil Richards & William Smart, How Should the Law Think About Robots? in Robot Law 1, 20 (Ryan Calo, A. Michael Froomkin and Ian Kerr, eds. 2016).
  3. For an interesting exploration of the issues, see James Boyle, Endowed by Their Creator? The Future of Constitutional Personhood, Brookings Institution (Mar. 9, 2011). For a full-throated denunciation of the ‘robot rights’ concept as philosophical error and ethical distraction, see Abeba Birhane & Jelle van Dijk, Robot Rights? Let’s Talk about Human Welfare Instead, Proceedings of the 3rd AAAI/ACM Conference on AI, Ethics, and Society 207-213 (Feb. 7, 2020).
  4. Charlie Stross, though, has suggested we should think of corporations as “Slow AIs”. Charlie Stross, Dude, you broke the future!, Charlie’s Diary (Jan. 2, 2018), https://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html (transcript of remarks to 34th Chaos Communication Congress, Leipzig, Dec. 2017).
  5. For speculation as to how a robot or AI might own itself, without people in the ownership chain, see Shawn J. Bayern, Autonomous Organizations (2021); Shawn J. Bayern, Are Autonomous Entities Possible?, 114 Nw. U. L. Rev. Online 23 (2019); Lynn LoPucki, Algorithmic Entities, 95 U.C.L.A. L. Rev. 887 (2018).
  6. See Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022) (upholding USPTO decision refusing application for patent in name of AI). For arguments in favor of granting such patents, see, e.g., Ryan Abbott, I Think, Therefore I Invent: Creative Computers and the Future of Patent Law, 57 B.C. L. Rev. 1079 (2016). For a European perspective, see P. Bernt Hugenholtz & João Pedro Quintais, Copyright and Artificial Creation: Does EU Copyright Law Protect AI-Assisted Output?, 52 IIC – Int’l Rev. Intell. Prop. & Competition L. 1190 (2021). The recent literature on the copyrightability of machine-generated texts is vast, starting with Annemarie Bridy, Coding Creativity: Copyright and the Artificially Intelligent Author, 2012 Stan. Tech. L. Rev. 5 (2012). An elegant recent article disagreeing with Bridy, with many citations to the literature, is Carys Craig & Ian Kerr, The Death of the AI Author, 52 Ottawa L. Rev. 31 (2021).
  7. Commissioner of Patents v. Thaler (DABUS), [2022] FCAFC 62.
  8. Paul Sawers, Chinese court rules AI-written article is protected by copyright, VentureBeat (Jan. 10, 2020), https://rai2022.umlaw.net/wp-content/uploads/2022/02/16_Chinese-court-rules-AI-written-article-is-protected-by-copyright.pdf.
  9. Clifford Nass & Youngme Moon, Machines and Mindlessness: Social Responses to Computers, 56 J. Soc. Issues 81 (2000); Kate Darling, Extending Legal Protection to Social Robots in Robot Law 213, 214, 220 (Ryan Calo, A. Michael Froomkin and Ian Kerr, eds. 2016).
  10. Darling, supra note 9, at 223.
  11. In Hohfeldian terms, if persons have a duty not to harm a robot, then, correlatively, the robot has a right not to be harmed by those persons. See Pierre Schlag, How to Do Things with Hohfeld, 78 L. & Contemp. Probs. 185, 200-03 (2014). Hohfeld was concerned with the relations of persons, and probably would have thought the idea of property having rights to be a category error. Yet if the duty to forbear from certain harms extends to the owner of the robot as well as others, I submit that the “rights” term of the correlative relations is a useful way to describe what the robot has.
  12. Darling, supra note 9, at 226-31.

#WeRobot Finished With a Bang!

(Metaphorically only.)

We will have recordings of substantially all the discussions up online in about a week.

Meanwhile, you can still read the papers.  You might want to start with the prize-winners:

… although I’d also like to give a shout-out to two of my personal favorites:

That said, all the papers were really good, which is pretty amazing.


#WeRobot 2021 Starts Today!

Join us for the 10th Anniversary Edition – Register Here. All events will be virtual. All times are US Eastern time.

At We Robot we ask (and expect) that everyone reads the papers scheduled for Days One and Two in advance of those sessions. (The Workshops do not have advance papers.) In most cases, authors do not deliver their papers. Instead we go straight to the discussant’s wrap-up and appreciation/critique. The authors respond briefly, and then we open it up to Q&A from our fabulous attendee/participants. Click on the paper titles below to download a .pdf text of each paper. Enjoy! Or you can download a zip file of Friday’s papers and Saturday’s papers.

We Robot 2021 Program

Download full schedule to your calendar.

We Robot 2021 will be hosted on Whova. We’ve prepared a We Robot 2021 Attendee Guide. You can also Get Whova Now.

We Robot 2021 has been approved for 19.0 Florida CLE credits, including 19.0 in technology, 1.0 in ethics, and 3.5 in bias elimination. Details here.

[Program schedule tables, with links to each paper, appear here.]


We Robot is Next Week!!!

WeRobot 2021

We Robot, now heading into its 10th anniversary, is the leading North American conference on robotics law and policy. The 2021 event will be hosted by the University of Miami School of Law on September 23 – 25, 2021.

NOW VIRTUAL
Due to safety concerns, we’ve decided to take We Robot to a fully virtual format again.

Earn CLE
19.0 Florida CLE credits approved, including 19.0 in technology, 1.0 in ethics, and 3.5 in bias elimination.

Register Today!

New virtual prices:
Workshop on Sept. 23: $25.00
Admission for both days, Sept. 24 & 25: $49.00
All students and UM Faculty for all 3 days: $25.00

Although we had looked forward to welcoming you back to Coral Gables and are sorry we will not be able to see you in person, we look forward very much to your virtual participation in We Robot 2021. The heart of We Robot has always been its participants, and we will do all we can to preserve that. See you (virtually) soon!

For more information, visit WeRobot2021.com

See Full Program

September 23 – 25, 2021


We Robot Paper Submission Deadline Extended One Week

Everyone says it’s harder to get things done under COVID, so we’re extending the deadline for submission of paper abstracts to We Robot 2021 by one week – to midnight US East Coast time on February 8, 2021.

We will attempt to keep to the rest of the schedule, but paper acceptance notices may also end up slightly delayed.
