The Judgement Days Of Artificial Intelligence
What is the value of life and why? Whose life matters most?
I am old enough to remember a world without the internet and smartphones, yet young enough to wonder where humanity is heading as computers ‘evolve’ into the uncharted territory of artificial intelligence (AI).
For example, a recent report by AP shows that in an increasing number of local and state courtrooms around America, “judges are now guided by computer algorithms before ruling whether criminal defendants can return to everyday life or remain locked up awaiting trial.” Read more: https://www.rt.com/op-ed/417782-robots-scientists-technology-intelligence/
In early 2016, the Office of Personnel Management, the human resources agency for federal employees, began researching software that would track the social media accounts of security clearance applicants. The agency was reportedly looking to contract with companies that could do searches with almost no need for human input and had a “robust identity-matching algorithm” to cut down on mix-ups.
John Fernandez is an Obama-era Commerce Department official who is now the global chair of a venture capital firm called Nextlaw Labs, which focuses on applying cutting-edge technologies to the legal world. When it comes to automation, he says, government is “a target-rich environment, there’s no question about that.” Read more: https://www.politico.com/magazine/story/2018/01/05/washington-automation-congress-politics-lobbying-policy-216216
Artificial intelligence prevails at predicting Supreme Court decisions. For each year from 1816 to 2015, the team created a machine-learning statistical model called a random forest. It looked at all prior years and found associations between case features and decision outcomes. Decision outcomes included whether the court reversed a lower court’s decision and how each justice voted. The model then looked at the features of each case for that year and predicted decision outcomes. Finally, the algorithm was fed information about the outcomes, which allowed it to update its strategy and move on to the next year. Read more: http://www.sciencemag.org/news/2017/05/artificial-intelligence-prevails-predicting-supreme-court-decisions
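To make that procedure concrete, here is a minimal sketch of the expanding-window, year-by-year evaluation using scikit-learn's random forest. The file name, feature columns and outcome label below are hypothetical placeholders, not the study's actual dataset:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical input: one row per case, with a "year" column, a few
# case-feature columns, and a binary "reversed" outcome.
cases = pd.read_csv("scdb_cases.csv")
X = pd.get_dummies(cases[["issue_area", "lower_court", "petitioner"]])  # placeholder features
y = cases["reversed"]

accuracy = {}
for year in range(1816, 2016):
    train = cases["year"] < year    # fit on all prior years only...
    test = cases["year"] == year    # ...then predict the current year's cases
    if train.sum() == 0 or test.sum() == 0:
        continue
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X[train], y[train])
    accuracy[year] = model.score(X[test], y[test])
    # This year's true outcomes now sit in the training pool for the
    # next iteration: the model "updates its strategy and moves on".
```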
“I think perhaps one of the most important things, before we talk about efficacy, accuracy and fairness, is that this is a commercial entity and the details of the algorithm are very tightly kept secret. So, these decisions are being made inside of a black box where we don't know what is happening and I think, ... given the stakes here, that should be a little concerning to us.” Read more: https://www.pri.org/stories/2018-02-03/growing-trend-using-predictive-algorithms-courtrooms-and-human-services-offices
Typically, government agencies do not write their own algorithms; they buy them from private businesses. This often means the algorithm is proprietary or "black boxed", meaning only the owners, and to a limited degree the purchaser, can see how the software makes decisions. Currently, there is no federal law that sets standards or requires the inspection of these tools. Read more: https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/
“Why are you making people wait six months to two years to go through a court case?” he said. “If they truly believed that it’s not fair that someone who can’t afford bail has to stay in jail, give them a speedy trial.” - Corrin Rankin, owner of Out Now Bail Bonds in Redwood City and area director of the California Bail Agents Association.
San Francisco is seeking to modernize its bail system by using a computer algorithm to predict whether a defendant might re-offend or bolt if freed from jail, an effort to reform long-standing practices that many in the city’s justice system believe penalize the poor and open the door to potential racial bias. Read more: https://www.sfchronicle.com/crime/article/Seeking-a-better-bail-system-SF-turns-to-8899654.php
A similar algorithm created by Northpointe was used to guide prisoner release dates. ProPublica published the results of an investigation in May that found that the “risk assessment” scores given by Northpointe’s algorithm disproportionately predicted that black people were more likely to commit another crime than white people after they got out. Just 20 percent of Northpointe’s predictions were accurate.
Northpointe uses 137 questions to judge the potential of people committing a crime. As ProPublica reports, the questions avoid asking about race outright. Some of the questions may be related, however. Examples of the questions include: “Was one of your parents ever sent to jail or prison?” and “How often did you get in fights while at school?” There are also yes/no questions such as “A hungry person has a right to steal.” Other factors include education levels and whether a person has a job. Read more: https://www.inverse.com/article/16019-study-algorithm-that-s-biased-against-blacks-behind-poor-inmate-recidivism-predictions
Our analysis of Northpointe’s tool, called COMPAS (which stands for Correctional Offender Management Profiling for Alternative Sanctions), found that black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism, while white defendants were more likely than black defendants to be incorrectly flagged as low risk. We found that sometimes people’s names or dates of birth were incorrectly entered in some records – which led to incorrect matches between an individual’s COMPAS score and his or her criminal records. Read more: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
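The heart of this kind of audit is comparing error rates across groups rather than overall accuracy. A minimal sketch, assuming a hypothetical table with columns for race, the tool's high-risk flag and the actual two-year outcome:

```python
import pandas as pd

# Hypothetical columns: "race", "high_risk" (1 if the tool flagged the
# defendant as higher risk) and "reoffended" (1 if they actually did).
df = pd.read_csv("compas_scores.csv")

for group, g in df.groupby("race"):
    did_not = g[g["reoffended"] == 0]
    did = g[g["reoffended"] == 1]
    fpr = (did_not["high_risk"] == 1).mean()  # flagged high risk, did not reoffend
    fnr = (did["high_risk"] == 0).mean()      # flagged low risk, did reoffend
    print(f"{group}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```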
Rating a defendant’s risk of future crime is often done in conjunction with an evaluation of a defendant’s rehabilitation needs. The Justice Department’s National Institute of Corrections now encourages the use of such combined assessments at every stage of the criminal justice process.
In Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin, the results of such assessments are given to judges during criminal sentencing. Read more: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Algorithms are just as biased as the people who make them. An investigation by ProPublica found that Northpointe, creator of an algorithm that attempts to predict an incarcerated person’s likelihood of committing another crime, predicts black people are much more likely than white people to commit a crime after getting out. The algorithm creates a “risk assessment” score for inmates and is used across the country to guide prison release dates. After analyzing data for more than 7,000 people in Broward County, Florida, the reality of racist algorithms is clear: black defendants were almost twice as likely as white defendants to get a prediction of committing another crime. Read more: https://www.inverse.com/article/16019-study-algorithm-that-s-biased-against-blacks-behind-poor-inmate-recidivism-predictions
Arizona, Kentucky, and Alaska have so far adopted these algorithms to identify those most likely to flee or commit dangerous crimes should they be set free. Robots are now expected to displace humans in almost every field, and that’s something we should either embrace or reject altogether. Read more: https://sanvada.com/2018/02/07/artificial-intelligence-replace-judges-courtroom/
Cash bail, which is designed to ensure that people charged with crimes turn up for trial, has been part of the United States court system for centuries. But it has drawn fire in recent years for keeping poorer defendants in jail while letting the wealthier go free. Studies have also shown it widens racial disparities in pretrial incarceration.
States such as New Jersey, Arizona, Kentucky, and Alaska have adopted these tools. Defendants who receive low scores are recommended for release under court supervision. Read more: https://www.csmonitor.com/USA/Justice/2018/0131/Artificial-intelligence-plays-budding-role-in-courtroom-bail-decisions
Instead of holding criminal defendants on cash bail, courts around the United States are increasingly using algorithmic risk-assessment tools to help judges decide if a defendant should be jailed or go free while awaiting trial. The AI system used in New Jersey was developed by the Houston-based Laura and John Arnold Foundation. The foundation’s model evaluates a defendant on nine risk factors, centered on age and criminal history: age at current offense, current violent offense, pending charge at the time of the offense, prior misdemeanor conviction, prior felony conviction, prior violent conviction, prior failure to appear at a court hearing in the past two years, prior failure to appear older than two years, and prior sentence to incarceration.
The Arnold Foundation notes that its algorithm is straightforward and open to inspection by anyone – although the underlying data it relies on is not. Read more: https://www.usnews.com/news/best-states/new-jersey/articles/2018-01-31/judging-by-algorithm-using-risk-factors-to-score-defendants
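Since the factor list is public even where the underlying data is not, a points-based score over factors like these is easy to illustrate. The weights, caps and field names below are invented for the sketch; they are not the Arnold Foundation's published scoring:

```python
from dataclasses import dataclass

@dataclass
class Defendant:
    age_at_offense: int
    current_violent_offense: bool
    pending_charge_at_offense: bool
    prior_misdemeanor: bool
    prior_felony: bool
    prior_violent_convictions: int
    fta_past_two_years: int          # failures to appear, past two years
    fta_older_than_two_years: bool
    prior_incarceration: bool

def failure_to_appear_points(d: Defendant) -> int:
    """Toy flight-risk score: every weight here is hypothetical."""
    points = 0
    points += 1 if d.pending_charge_at_offense else 0
    points += 1 if (d.prior_misdemeanor or d.prior_felony) else 0
    points += 2 * min(d.fta_past_two_years, 2)   # recent no-shows weigh heaviest
    points += 1 if d.fta_older_than_two_years else 0
    return points  # a real tool would map this raw total onto a 1-6 scale for the judge

print(failure_to_appear_points(Defendant(
    age_at_offense=24, current_violent_offense=False, pending_charge_at_offense=True,
    prior_misdemeanor=True, prior_felony=False, prior_violent_convictions=0,
    fta_past_two_years=1, fta_older_than_two_years=False, prior_incarceration=False)))
```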
That tool is part of a larger package of bail reforms that took effect in January 2017, effectively wiping out the bail-bond industry, emptying many jail cells and modernizing the computer systems that handle court cases. "We're trying to go paperless, fully automated," said Judge Ernest Capossela, who helped usher in the changes at the busy Passaic County courthouse in Paterson, New Jersey. Read more: http://abcnews.go.com/Technology/wireStory/ai-court-algorithms-rule-jail-time-52732343
Data-driven tool gives Harris County judges new way to assess defendants’ pretrial risk level. “Every day our judges face questions involving pretrial bail: If this person is released from jail, will he show up to court? Will he be arrested for something else? Are there conditions we can impose to better ensure his appearance and protect the public?
The Public Safety Assessment will help inform these important decisions,” explained Hon. Margaret Harris, Presiding Judge of County Criminal Courts. “We thank fellow Houstonians Laura and John Arnold for their innovative work in this area, for their commitment to making a difference in Harris County, and for entrusting us with this new tool for justice.” Read more: http://www.arnoldfoundation.org/data-driven-tool-gives-harris-county-judges-new-way-assess-defendants-pretrial-risk-level/
A system called VALCRI has already been developed to perform the labor-intensive aspects of a crime analyst’s job in mere seconds, wading through tons of data such as texts, lab reports and police documents to highlight things that may warrant further investigation. The UK’s West Midlands Police will be trialing VALCRI for the next three years using anonymized data amounting to some 6.5m records. A similar trial is underway by the police in Antwerp, Belgium. However, past AI and deep learning projects involving massive data sets have been problematic. Read more: http://theconversation.com/why-using-ai-to-sentence-criminals-is-a-dangerous-idea-77734
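VALCRI itself is proprietary, so as a stand-in, the general idea of surfacing the records most relevant to an analyst's query from a large pile of free text can be shown with plain TF-IDF retrieval; the snippets below are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented stand-ins for the texts, lab reports and police documents mentioned above.
records = [
    "witness statement: suspect seen near the warehouse at night",
    "lab report: fibres recovered from vehicle match sample B",
    "interview transcript: alibi places subject at home all evening",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(records)
query_vec = vectorizer.transform(["warehouse sighting at night"])

# Rank every record against the analyst's query; the best leads surface first.
scores = cosine_similarity(query_vec, doc_matrix).ravel()
for score, text in sorted(zip(scores, records), reverse=True):
    print(f"{score:.2f}  {text}")
```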
A new app funded by conservative interest groups has the potential to streamline our justice system. The Scalia, Thomas, Alito, Roberts, Sometimes Kennedy and Now We’ve Got Neil Gorsuch Know-It-All System, or STAR-SKAN, uses artificial intelligence to distill the views of present and former Republican appointees to the Supreme Court into a handful of algorithms that can be uploaded and stored forever on a computer or iPhone and used to decide any issue touching on constitutional law. Its algorithms can reportedly resolve the knottiest legal questions in seconds. Read more: https://www.law.com/newyorklawjournal/sites/newyorklawjournal/2017/11/28/artificial-intelligence-and-constitutional-law/
“So, if we’re already in the business of trying to predict crime, why not make it a little bit more formal? At least with these risk assessment algorithms it’s transparent. You know what inputs are being included in this decision and what inputs are not. Race, for instance, is an input that is not allowed to be used in these. So even though there’s indirect ways they embed racial biases, they don’t have the same direct racism that many judges might have,” Stevenson said. Read more: http://www.louisvillecardinal.com/2018/01/55428/
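One way to probe those “indirect ways” is to test whether the allowed inputs can predict the excluded attribute: if a simple model can recover race from them, they are acting as proxies. A sketch under that assumption, with hypothetical column names:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("risk_inputs.csv")  # hypothetical assessment-input table
allowed = pd.get_dummies(df[["zip_code", "employment_status", "prior_arrests"]])
protected = df["race"]               # excluded from the tool, present in the audit data

# Cross-validated accuracy well above the base rate means the allowed
# inputs leak information about race even though race itself is excluded.
scores = cross_val_score(LogisticRegression(max_iter=1000), allowed, protected, cv=5)
print("proxy predictability:", scores.mean())
```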
Being able to tell when a person is lying is an important part of everyday life, but it’s even more crucial in a courtroom. Researchers at the University of Maryland (UMD) developed the Deception Analysis and Reasoning Engine (DARE), a system that uses artificial intelligence (AI) to autonomously detect deception in courtroom trial videos. The team of UMD computer science researchers, led by Center for Automation Research (CfAR) chair Larry Davis, describes the system in a study that has yet to be peer-reviewed. Read more: https://futurism.com/new-ai-detects-deception-bring-end-lying-know-it/
AI System Detects ‘Deception’ in Courtroom Videos. Basically, it uses computer vision to identify and classify facial micro-expressions and audio frequency analysis to pick out revealing patterns in voices. The resulting automatic deception classifier was found to be almost 90 percent accurate, handily beating out humans assigned to the same task. This was based on evaluations of 104 mock courtroom videos featuring actors instructed to be either deceptive or truthful. Read more: https://motherboard.vice.com/en_us/article/zmqv7x/ai-system-detects-deception-in-courtroom-videos
Researchers concluded that DARE was better at detecting if someone was lying than humans. They said: “Our vision system, which uses both high-level and low level visual features, is significantly better at predicting deception compared to humans.” Read more: https://www.standard.co.uk/news/world/artificial-intelligence-system-could-be-used-to-detect-if-people-are-lying-in-court-a3724221.html
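DARE's actual architecture is not reproduced here, but the fusion pattern the articles describe, visual micro-expression features combined with audio-frequency features feeding one classifier, can be sketched generically. Everything below, including the random placeholder features, is illustrative only:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_clips = 104                            # matches the number of mock courtroom videos
visual = rng.random((n_clips, 12))       # stand-in for per-clip micro-expression features
audio = rng.random((n_clips, 8))         # stand-in for pitch/spectral statistics
labels = rng.integers(0, 2, n_clips)     # 1 = deceptive, 0 = truthful

X = np.hstack([visual, audio])           # simple feature-level fusion of both modalities
clf = GradientBoostingClassifier().fit(X[:80], labels[:80])
print("held-out accuracy:", clf.score(X[80:], labels[80:]))  # ~chance on random data
```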
Law points the direction to follow, and the courts ensure we keep that course. Part of the great challenge of today’s justice system is that the world has changed, but courts have not. In this regard, we lawyers are guilty as charged: we cling to this old, outdated court system just as traditional taxi drivers fought the inevitable change Uber brought to our lives.
We jog, lift weights, drink carrot juice and eat healthy… so that we may die healthy. We are in search of a healthy death, a kind of ‘I’m dying but I feel good’ approach.
AI will open a Pandora’s box; we must be prepared. Law must be an agent of change, not an obstacle to progress. Our outdated court system, laws and policies are not fit to face these challenges. Read more: https://www.nation.co.ke/oped/blogs/dot9/franceschi/2274464-4283164-qe6kss/index.html
The Partnership for Public Service has issued a report that examines how artificial intelligence (AI) is being used by federal and state authorities in a variety of areas. The Partnership for Public Service is a nonpartisan, nonprofit organization based in Washington, D.C., that works to revitalize government service. Read more: https://www.natlawreview.com/article/partnership-public-service-releases-white-paper-using-artificial-intelligence-to
Amid the dire - and somewhat overhyped - predictions of occupations that will be decimated by artificial intelligence and automation, there is one crumb of comfort. Yes, lorry drivers, translators and shop assistants are all under threat from the rise of the robots, but at least the lawyers are doomed too. (Some of my best friends are lawyers, honest.) Read more: http://www.bbc.com/news/technology-41829534
Conference Report: The Legal AI Workshop at Legal Tech New York. The application of AI to the legal industry is believed to have first been proposed in 1971, in the Stanford Law Review. Panelists differentiated between applications for the business of law and for the practice of law, noting that the former was realizing more benefits, for example by reducing outside counsel spend. Read more: https://www.artificiallawyer.com/2018/01/30/conference-report-the-legal-ai-workshop-at-legal-tech-new-york/
Some useful links:
In a growing number of local and state courts in the U.S., judges are now guided by computer algorithms before ruling whether criminal defendants can return to everyday life or have to stay locked up! Read more: https://www.dailysabah.com/technology/2018/02/02/artificial-intelligence-starts-to-rule-on-jail-time
“Now, with data, even junior lawyers – or potentially even clients – can determine the odds of success in a particular matter. That is threatening to lawyers.” Read more: http://www.legalweek.com/sites/legalweek/2017/12/19/top-20-legal-it-innovations-2017-how-codex-is-connecting-the-artificial-intelligence-and-legal-communities/?slreturn=20171121153419
“Law firms want to win their case for their client. That is what matters more than anything else. And if using an AI system helps, then…. well, the answer is clear”. Read more: https://www.artificiallawyer.com/2018/01/17/artificial-lawyer-interview-jake-heller-ceo-casetext/
Robot judges? Edmonton research crafting artificial intelligence for courts Read more: http://www.cbc.ca/news/canada/edmonton/legal-artificial-intelligence-alberta-japan-1.4296763
Artificial intelligence to enhance Australian judiciary system. People being judged by machines feels very Orwellian, or Terminator-like. Read more: http://www.swinburne.edu.au/news/latest-news/2018/01/artificial-intelligence-to-enhance-australian-judiciary-system.php
E-DIVORCE: How artificial intelligence could help Australian couples break up quickly and cheaply. Read more: http://www.businessinsider.com/e-divorce-how-artificial-intelligence-could-help-australian-couples-break-up-quickly-and-cheaply-2017-8
Could AI Transform China’s Legal System? https://www.caixinglobal.com/2017-12-11/could-ai-transform-chinas-legal-system-101183154.html
Should AI be in charge of sentencing Australia's criminals? Read more: https://www.itnews.com.au/news/should-ai-be-in-charge-of-sentencing-australias-criminals-482079
How To Reboot The Justice System On Technology https://abovethelaw.com/2018/01/the-innovation-gap-part-2-how-to-reboot-the-justice-system-on-technology/?rf=1
Politicians and Innovators Agree: It’s Impossible to Govern AI https://futurism.com/govern-ai/
When algorithms rule on jail time http://registerguard.com/rg/business/36402104-63/when-algorithms-rule-on-jail-time.html.csp
What happens when America has a government for the people, by robots? https://www.politico.com/magazine/story/2018/01/05/politico-magazine-editors-note-future-work-216221
Why Dentons’ global plan deserves credit https://www.thelawyer.com/dentons-how-we-took-over-the-world/
S.2123 - Sentencing Reform and Corrections Act of 2015
https://www.congress.gov/bill/114th-congress/senate-bill/2123/text
Post-Conviction Risk Assessment
http://www.uscourts.gov/statistics-reports/publications/post-conviction-risk-assessment
http://www.uscourts.gov/sites/default/files/pcra_sep_2011_0.pdf
Risk, Race, & Recidivism: Predictive Bias and Disparate Impact https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2687339
28 States Have Adopted Ethical Duty of Technology Competence https://www.lawsitesblog.com/2015/03/11-states-have-adopted-ethical-duty-of-technology-competence.html
Wondering How Blockchain Can Be Used in Legal? Here’s One: Service of Process https://www.lawsitesblog.com
Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18) https://aaai.org/Conferences/AAAI-18/
Darren Aronofsky’s next project will be a courtroom drama that somehow involves artificial intelligence. http://www.slashfilm.com/darren-aronofsky-ai/
Videos:
New Jersey eliminates most cash bail, leads nation in reforms https://www.pbs.org/newshour/show/new-jersey-eliminates-cash-bail-leads-nation-reforms
Moot Court 2020: Legal Tech on Trial
The Technology Revolution, Lawyers and Courts: Why So Slow, and How Can We Accelerate Change?
FutureLaw Hot or Not – Watson and Beyond
E-Government
The Role of Technologists in Reforming the Criminal Justice System