Scientists Fear Trump Will Dismiss Blunt Climate Report – “The Times has reviewed an alarming draft report by government scientists who say climate change is happening now and severely affecting the United States, but the Trump administration must sign off on it before it can be released – The New York Times reports: “The draft report by scientists from 13 federal agencies, which has not yet been made public, concludes that Americans are feeling the effects of climate change right now. It directly contradicts claims by President Trump and members of his cabinet who say that the human contribution to climate change is uncertain, and that the ability to predict the effects is limited….The report [673 pages] was completed this year and is part of the National Climate Assessment, which is congressionally mandated every four years.”
Read more of this story at Slashdot.
The petition of the day is: Lozman v. City of Riviera Beach, Florida, 17-21
Issue: Whether the existence of probable cause defeats a First Amendment retaliatory-arrest claim as a matter of law.
Symposium: Mind the gap? The efficiency gap, its failures and the “problem” of geography and choice in redistricting
Chris Winkelman is general counsel to the National Republican Congressional Committee, which filed an amicus brief in support of the state appellants in Gill v. Whitford. Philip Gordon is an associate at Holtzman Vogel Josefiak Torchinsky PLLC and contributed to the NRCC’s brief.
In the dark recesses of single-origin-coffee shops, natural grocery stores, microbreweries and free-range-egg-and-eight-dollar-mimosa brunches in places as far-flung as San Francisco, Brooklyn and Washington, D.C., people are coming to a profound, and to them disturbing, revelation: The United States is a republic. As much of a shock as this must be to them, the Constitution spells out how one of our country’s most fundamental acts of representative government is undertaken — choosing the “Times, Places, and Manner of holding Elections.” This is a power the Constitution gives to state legislatures in Article I, Section 4, and, like most other parts of the Constitution, it is subject to certain constraints.
First, the same section of the Constitution gives Congress the power to “make or alter” election regulations. Congress in fact used that power on multiple occasions when it enacted the Voting Rights Act, the various Reapportionment Acts and the requirement of single-member districts for Congressional seats. Second, the Supreme Court imposed requirements such as “one-person, one-vote” to comply with the equal protection clause of the Constitution. Third, many state constitutions or statutes impose various criteria on redistricting, such as single-member districts, compactness and contiguity. Finally, the framers and the Supreme Court understood that redistricting, as Justice Byron White put it, is “intended to have substantial political consequences.” The history of redistricting in the United States, if it teaches us anything, teaches us that redistricting is a political act, was intended to be a political act, and has always been a political act. This simple truth has been difficult for the plaintiffs in Gill v. Whitford to accept, and has proven even more difficult for the Supreme Court.
The Supreme Court has wrestled with the question of how much partisanship is too much since it first found partisan-gerrymandering claims justiciable in Davis v. Bandemer, although the court sidestepped an earlier opportunity to address the question in 1932 in Wood v. Broom. Various tests have been offered by litigants in the hope of finding a judicially manageable standard, as yet to no avail. More than 30 years have passed since the court’s decision in Bandemer, and the court has still not found the long-sought desideratum of partisan gerrymandering. Just when plaintiffs, distraught that their failures at the ballot box cannot be saved by wins in the courtroom, had given up hope of ever finding a standard that would meet with the approval of five justices, come the plaintiffs in this case with a “scientific” method of determining impermissible partisan gerrymandering: the so-called “efficiency gap.”
The scientific-sounding efficiency gap simply measures the difference in the parties’ “wasted” votes. It counts a vote as wasted if it was cast for a losing candidate or exceeded what the prevailing candidate needed to win a given election (i.e., 50 percent of the vote plus one in a two-party race). The difference between the two parties’ wasted-vote totals is then divided by the total number of votes cast in the election, and the resulting percentage is the misleadingly named efficiency gap. However, even a cursory inspection of this so-called methodology reveals analytical flaws and partisan skullduggery too blatant to pass constitutional muster or stand up to common sense.
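The arithmetic is simple enough to sketch in a few lines. The following Python function is an illustrative implementation of the wasted-vote calculation as the measure's proponents describe it, assuming a pure two-party contest in every district; the function and variable names are ours, not the plaintiffs'.

```python
def efficiency_gap(results):
    """Compute the efficiency gap from district-level two-party results.

    `results` is a list of (party_a_votes, party_b_votes) tuples, one per
    district. A vote counts as "wasted" if it was cast for the losing
    candidate, or if it exceeded the 50-percent-plus-one the winner needed.
    """
    wasted_a = wasted_b = total = 0
    for a, b in results:
        district_total = a + b
        total += district_total
        threshold = district_total // 2 + 1  # votes needed to win
        if a > b:
            wasted_a += a - threshold  # winner's surplus votes
            wasted_b += b              # all of the loser's votes
        else:
            wasted_b += b - threshold
            wasted_a += a
    # Positive gap: party A wasted more votes, i.e. the map favors party B.
    return (wasted_a - wasted_b) / total
```

On a hypothetical five-district map where Party A wins three districts 60-40 and loses two 20-80, the gap comes out to -0.222: Party B wasted far more votes, so this map favors Party A.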
A fundamental problem with the efficiency gap is that it treats voters as monolithic blocs who vote party above all else. This assumption is contrary to reality. The efficiency gap, like most statistical election models, attempts to predict the future. It is particularly bad at doing so because it relies on the results of a single statewide election cycle, aggregated from a series of district-by-district contests. As recent elections have laid bare, the assumptions that voters 1) will never change their minds, and 2) vote for the party rather than the candidate, are not supported by actual election outcomes. The efficiency gap does not account for vote switchers or split-ticket voting. The authors of the efficiency gap nonetheless maintain that a gap of eight percent ought to be sufficient to render a legislative reapportionment a justiciable partisan gerrymander.
On the nationwide level one need only look at the difference between the 2012 and 2016 presidential elections to see that voters vote based on candidate, not party. In the wake of the 2016 presidential election, multiple studies were conducted to determine how many people voted for President Barack Obama and then voted for President Donald Trump. It has been determined that anywhere from 11 percent to 15 percent of voters switched their votes.
Things do not improve for the efficiency gap at the congressional level either. In the 2016 general election, there were 12 Democratic members of Congress who won election in districts where Trump won the vote. Similarly, there were 23 Republican members of Congress who won in districts where Hillary Clinton won the popular vote. Having watched these races develop, I can tell you — and the results bear out — that candidates matter, issues matter and campaigns matter. While the Democrats clung to a strategy that asked voters to mostly ignore their congressional candidates in favor of focusing on the Republican presidential candidate, Republicans focused on the candidates and the issues district by district.
In a more recent example, Secretary of Health and Human Services Tom Price had consistently won Georgia’s 6th congressional district by 20- to 30-point margins. In a 2017 special election, the voters of the Georgia 6th elected Karen Handel by a margin of less than four points. One could simply chalk this up to the loss of incumbency, but that ignores something very interesting about elections in the United States: If incumbency matters that much, then so does a voter’s choice to support the incumbent. To put it another way, if 20 percent fewer people vote for a candidate of the same party as the departed incumbent, that reinforces rather than erodes the idea that people vote for the person, not the party.
As long as Democrats live in “The Bubble,” they will continue to be at a disadvantage in redistricting
The efficiency gap is mired in a plethora of problems, both methodological and quantitative. There is not nearly enough time or space to fully document them all here. The biggest single problem with the efficiency gap is that it assumes that political populations are relatively evenly dispersed geographically. Scientific literature and common-sense experience do not support that assumption. Currently, Democrats in the United States are mostly clustered in urban areas, while Republicans tend to inhabit more suburban and rural areas. “Saturday Night Live,” in its skit called “The Bubble,” captured this perfectly when they asked, “What if there was a place where the unthinkable (President Trump’s election) didn’t happen and life could continue for progressive Americans just as before?” Their answer was as funny as it was unsurprising, “Well, now there is… The Bubble is a community of likeminded freethinkers, and no one else.” The skit ends with the discovery that “The Bubble” is just Brooklyn, New York, “with a bubble on it.”
While amusing, this skit highlights what we all know and see when we look at a county-by-county map after a presidential election. There is a sea of red counties interspersed with pinpoints of blue. This feature of the United States’ political geography is well documented in political-science literature and applies equally to Wisconsin, the state at issue in this case, where Democrats cluster in and around the major cities of Madison and Milwaukee and Republicans inhabit the remainder of the state. Justices Anthony Kennedy and Antonin Scalia both noted this clustering effect in Vieth v. Jubelirer, as did Justice Sandra Day O’Connor in Bandemer.
This asymmetrical grouping of voters has real-world consequences on attempts to form legislative districts using traditional districting criteria (compactness, contiguity, equal population etc.). Traditional districting criteria exist, at least in part, to give courts and map makers some guidelines for evaluating maps to ensure compliance with the equal protection clause of the Constitution. Given the focus that the Supreme Court has placed on the shapes of districts, the lower court could be forgiven for rejecting the challenged maps in Wisconsin’s Act 43 if the shapes of the district boundaries were particularly egregious. They were not. The plaintiffs in this case even conceded that the challenged districts were relatively compact and contiguous and that they met the requirements of “one-person one-vote.” I urge readers to compare Wisconsin’s Act 43 maps to maps in states like Maryland and Illinois. Wisconsin’s maps look pedestrian by comparison. To put it another way, if the maps were a coloring book, Wisconsin’s would be for toddlers with crayons and Maryland’s would be for adults with very sharp pencils. However, the two-judge district court majority, in a novel approach, eschewed traditional districting criteria in favor of the efficiency-gap test in order to rule that Republicans had given themselves an overwhelming unconstitutional electoral advantage over the life of Act 43.
The district court ignored the actual impact that its decision will have. Compactness, as Kennedy said in Vieth, helps Republicans because of the effect of political geography. In order to comply with this new efficiency-gap standard, the Wisconsin General Assembly would have to create maps that are less compact and contiguous. The Supreme Court has long lamented the snakes, “sacred Mayan bird[s],” “Rorschach ink-blot test[s]” and “uncouth twenty-eight-sided figure[s]” that creative cartographers have made into legislative districts. Yet, in this case, the challengers are asking the court to force state legislatures across the country to fix the Democrats’ political geography problem by ignoring years of precedent to make less compact and contiguous maps. The court should roundly reject this invitation.
Given all of the flaws inherent in the efficiency-gap theory, and the tenuous nature of Supreme Court jurisprudence in this area, we can only hope that, after 30 years of uncertainty, the court will finally decide what the Constitution has always demanded: In a constitutional republic, it must be the people’s representatives who draw districts, not the courts and certainly not unelected academics promoting flawed and biased notions like the “efficiency gap.”
Thomas P. Wolf is counsel for the Democracy Program at the Brennan Center for Justice at NYU School of Law.
Even before the Supreme Court announced that it would hear oral argument in Gill v. Whitford, a conventional wisdom of sorts about the case had settled in: This would be one of the most important appeals of the court’s October 2017 term and perhaps one of the most significant democracy cases in a decade or more. It would either be an opportunity for the court to rid us once and for all of the scourge of partisan gerrymandering in all its forms, or end for all time the search for a partisan-gerrymandering cause of action.
This case will be important, no doubt. But, in reality, the challenge it presents the Supreme Court is a much more modest one.
Far from being a referendum on the American redistricting process writ large, this case asks the Supreme Court to weigh in on the constitutionality of one specific type of gerrymandering: a political party’s use of the redistricting process to net and entrench an unbreakable legislative majority that it couldn’t command without extreme manipulations of the electoral map. (This is the type of abuse a majority of the court seemed to have in mind in 2015 when, in Arizona State Legislature v. Arizona Independent Redistricting Commission, it described “partisan gerrymandering” as “the drawing of legislative district lines to subordinate adherents of one political party and entrench a rival party in power.”)
An extreme, seat-maximizing gerrymander is exactly what occurred in Wisconsin. In 2010, Wisconsin Republicans won fortuitous majorities in the fall elections. They then used that control to create a map for state assembly elections that would guarantee them large legislative majorities even with a minority of the statewide vote, and, crucially, deny their Democratic opponents the same opportunity. They did this intentionally. And they succeeded. Consultants and legislative aides – supervised by the leaders of the state’s Republican caucus – worked away in an off-site “map room” to engineer maps with the aid of sophisticated social-science techniques. Legislative Democrats were entirely excluded from the mapping process. Even rank-and-file Republicans were largely left in the dark, shown only information relating to their specific districts and only after signing nondisclosure agreements. The maps were then rapidly pushed through both houses of the legislature and signed into law by the state’s Republican governor.
By cementing in majorities – or sometimes even supermajorities – that allow lawmakers to pursue their agendas without regard for the changing tide of public opinion, these extreme gerrymanders rob voters of their right to accountable legislatures. When states like Wisconsin or North Carolina, which have vibrant political cultures that frequently produce close statewide elections and switches in party control of statewide seats, are locked into the same legislative slates dominated by one party, voters also lose their right to a representative government. And when mapmakers intentionally use political data to create these kinds of maps – disadvantaging voters on the basis of their political expression and affiliation and undercutting their ability to aggregate their votes to elect legislators of their choice – they likewise undermine the First and 14th Amendments.
The good news is that targeting the kind of extreme gerrymandering at issue in this case doesn’t carry the threat of judicial intervention into maps everywhere. Extreme gerrymandering is a problem in only a handful of states at the congressional level, and under a dozen at the state legislative level. Under these circumstances, any fear of a flood of new redistricting litigation isn’t a viable reason – let alone an excuse – for courts to do nothing. Instead, it’s an incentive to define the problem being addressed clearly and vet the elements of a constitutional offense rigorously.
Crucially, courts don’t need to rely on social science to police extreme maps. The district court in this case didn’t. The various metrics that academics have developed and litigants are now deploying frequently correlate with observable, objective, real-world factors. Courts could easily apply these factors to place meaningful limits on partisan gerrymandering (and partisan-gerrymandering causes of action). The list of these factors is potentially long, but a few stand out as particularly administrable and useful for limiting the total number of maps that would be subject to constitutional challenge. And the Supreme Court would have broad latitude to figure out how, exactly, to factor these into its doctrine.
The most important factors are single-party control of the redistricting process and a recent history of close – or increasingly close – statewide elections. When both of these factors are present – as they are in Wisconsin – a party is more likely to attempt to entrench itself and that attempt is more likely to work. Think of it as a kind of “motive and opportunity” analysis. When statewide elections aren’t close, the party in control of the redistricting process isn’t likely to feel the status anxiety necessary to justify complicated redistricting machinations. (The state probably also won’t have the kind of political geography necessary for the party to eke out a large number of the close seats that create bias.) Politics as usual should naturally produce the outcomes the party wants. Similarly, a party will likely only be able to force through an extreme gerrymander when the other parties don’t have some procedural check on the process. If opposition parties can veto the worst gerrymanders, courts can feel comfortable that normal politics will have weeded out deeply biased maps.
Other readily identifiable and objective evidence of entrenchment could also be helpful for flagging the worst maps. Chief among these are deviations from normal districting processes, including excessive secrecy or speed. We saw these irregularities in spades during Wisconsin’s redistricting process. If redistricting proceeds relatively deliberately and transparently, maps are likely to be less biased and courts need be less concerned that the map-drawers overrode normal politics to get their way.
Contemporary social-science metrics for measuring partisan symmetry in maps – which have become more sophisticated since the Supreme Court last considered partisan gerrymandering in the mid-2000s – can also help the courts identify situations in which something is likely awry and needs closer examination. These metrics are one set of tools among many the courts have at their disposal, much as doctors have an array of techniques for diagnosing illnesses, such as lab tests, physical exams and patient interviews. They can provide both map-drawers and prospective plaintiffs with some ex ante guidance as to which maps might be constitutionally infirm. The deeper methodological differences among the leading metrics aside, they converge remarkably frequently on the same small sets of extremely biased plans. Additional statistical applications and simulated alternative mapping programs provide extra layers of defense against meritless claims by helping courts identify statistically significant bias and filter out the effects — if any — of residential clustering on structural inequalities in voters’ ability to convert votes into seats.
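To make the discussion concrete, one of the simpler measures in this family of partisan-symmetry metrics, the mean-median difference, can be computed in a few lines of Python. This is a generic illustration of the concept, not the specific methodology used by any party in Gill:

```python
from statistics import mean, median

def mean_median_difference(vote_shares):
    """Mean-median difference for one party's district-level vote shares.

    `vote_shares` holds the party's share of the two-party vote in each
    district (values between 0 and 1). When the party's median district
    share trails its mean share, its votes are packed into a handful of
    lopsided districts, which is often read as a sign the map is skewed
    against it.
    """
    return mean(vote_shares) - median(vote_shares)
```

A party whose district-level vote shares are, say, [0.2, 0.2, 0.55, 0.55, 0.55] has a mean share of 0.41 but a median of 0.55, for a difference of -0.14. Large gaps in either direction are a flag for closer scrutiny, not proof of a violation.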
A Supreme Court that defined the problem of partisan gerrymandering and its solution in the limited way sketched out here would be able to box out the worst kinds of redistricting abuses in a manageable and discernible manner, primarily by establishing the outer bounds of constitutional behavior. States would still have substantial latitude within these bounds to run their redistricting processes as they saw fit and would be able to make nuanced choices from among many possible map configurations without fear of judicial interference. If the social science is any indication, most already do so without generating bad maps.
Partisan gerrymandering is a major problem in many of its forms. But in considering Gill v. Whitford, the Supreme Court should think smaller, focusing on the particular problem of Wisconsin’s extreme map. Everything follows from that.
Digital technology mediates our public and private lives. That makes computer science a powerful discipline, but it also means that ethical considerations are essential to the development of these technologies. Not every new development is welcomed by users; consider, for example, a Facebook patent application describing technology that could identify users’ emotions through the cameras on their devices. A critical approach to developing digital technologies, guided by philosophical and ethical principles, will allow interventions that improve society in meaningful ways.
The Center for Information Technology Policy recently organized a conference to discuss research ethics in different computer science communities, such as machine learning, security, and Internet measurement. This blog post is the first in a series that summarizes and builds on the panel discussions at the conference.
Prof. Arvind Narayanan points out that computer science sub-communities have traditionally developed their own community standards about what is considered to be ethical. See for example responsible vulnerability disclosure standards in information security, or the Menlo Report for the Internet measurement discipline. This allows norms and standards to be tailored to the needs of sub-disciplines. However, the increasing responsibilities of researchers and sub-communities, arising from the increasing power and reach of computer science, are sometimes met with confusion. There is a tendency to see ethical considerations as a “policy issue” to be dealt with by others.
Prof. Melissa Lane of the University Center for Human Values points out that while ethics is rooted in understanding community standards and norms, these do not exhaust it, as some researchers in computer science and other fields can sometimes be tempted to think. Rather, the academic study of ethics provides the tools to critically reflect on these norms and challenge existing and new practices. A meaningful computer science research ethics therefore does not just translate existing norms into functional requirements, but explores how values are enabled, operationalized, or stifled through technology. A careful analysis of a particular context may even uncover new values that were previously taken for granted or not even considered to be a norm. Think, for example, of “disattendability” — the idea of going about your business without anyone tracking you or paying attention to you. We usually take this for granted in the physical world, but on the Internet, ad trackers, among others, actively violate this norm on an ongoing basis. By understanding the effects of design choices and methodologies, ethics guides technology designers to choose the most appropriate approach among the available alternatives.
Ethics is known for its somewhat conflicting theories, such as consequentialism (“the ends justify the means”) and deontology (“Act in such a way that you treat humanity […] never merely as a means to an end, but always at the same time as an end”). Prof. Susan Brison cautions against an approach that simply takes an ethical theory off the shelf and applies it to a technology. She raised the question of whether computer science research and data science may require new types of ethics, or evolved theories. Digital data is changing the underlying properties of information, challenging our traditional ways of thinking in important ways. For example, micro-targeting bespoke political messages to individuals circumvents the ability of ‘good speech’ to drown out ‘bad speech’, a foundational idea behind the concept of freedom of speech.
In my research, I’ve found that ethical guidelines can be incomplete, inaccessible, or conflicting, and existing legal statutes from previous technological eras may not be directly applicable to current technology. This has resulted in computer science communities being somewhat confused about their ethical and legal responsibilities. The upcoming posts in this series will explore some of the ethical standards in machine learning, security, algorithmic transparency, and Internet measurement. We welcome any feedback to move this discussion forward at a crucial time for the ethics of computer science.
See the introduction to the conference here.
- At Fa on First, Wen Fa urges the Supreme Court to review a challenge to a Minnesota law that “prohibits voters from wearing political apparel at the polling place,” arguing that “[b]y criminalizing all sorts of shirts, buttons, and badges, Minnesota has essentially created speech-free zones at polling places across the State” and that a “favorable ruling from the Supreme Court would vindicate the First Amendment rights of voters nationwide.”
- At NBC News, Julie Moreau reports on the results of a recent study indicating that “the 2015 Obergefell v. Hodges ruling that legalized same-sex marriage … helped to shift Americans’ perception of social norms in support of same-sex marriage,” research that she states may be relevant as the Supreme Court hears upcoming “cases related to anti-LGBTQ discrimination.”
- In The Washington Examiner, Ryan Lovelace reports that a “nonprofit led by a lawyer for President Trump, Jay Sekulow, is asking the Supreme Court to review a federal court’s blocking of the publication of surreptitiously recorded videos involving abortion providers.”
- In an analysis for The Washington Post’s Monkey Cage blog, Bernard Grofman and German Feierherd look at how other countries conduct legislative redistricting as the Supreme Court prepares to consider “the much-anticipated Gill v. Whitford,” which “brings up the hot-button question of whether a state legislature may draw electoral districts that favor one party over another”; they conclude that “redistricting looks quite different elsewhere, for several reasons.”
Remember, we rely exclusively on our readers to send us links for our round-up. If you have or know of a recent (published in the last two or three days) article, post, or op-ed relating to the Court that you’d like us to consider for inclusion in the round-up, please send it to roundup [at] scotusblog.com.
The petition of the day is: Sveen v. Melin, 16-1432
Issue: Whether the application of a revocation-upon-divorce statute to a contract signed before the statute’s enactment violates the contracts clause.
Funk, Kellen R. and Mullen, Lincoln A., The Spine of American Law: Digital Text Analysis and U.S. Legal Practice (July 12, 2017). American Historical Review (February 2018). Available at SSRN: https://ssrn.com/abstract=3001377
“In the second half of the nineteenth century, the majority of U.S. states adopted a novel code of legal practice for their civil courts. Legal scholars have long recognized the influence of the New York lawyer David Dudley Field on American legal codification, but tracing the influence of Field’s code of civil procedure with precision across some 30,000 pages of statutes is a daunting task. By adapting methods of digital text analysis to observe text reuse in legal sources, this article provides a methodological guide to show how the evolution of law can be studied at a macro level—across many codes and jurisdictions—and at a micro level—regulation by regulation. Applying these techniques to the Field Code and its emulators, we show that by a combination of creditors’ remedies the code exchanged the rhythms of agriculture for those of merchant capitalism. Archival research confirmed that the spread of the Field Code united the American South and American West in one Greater Reconstruction. Instead of just a national political development centered in Washington, we show that Reconstruction was also a state-level legal development centered on a procedure code from the Empire State of finance capitalism.”
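The kind of text-reuse detection the abstract describes is typically built on some form of n-gram “shingling” followed by set comparison. The sketch below is a generic illustration of that family of techniques, assuming nothing about the authors’ actual pipeline; the names and the shingle length are illustrative.

```python
def shingles(text, n=5):
    """Break a text into its set of overlapping n-word shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(text_a, text_b, n=5):
    """Jaccard similarity of two texts' shingle sets: |A & B| / |A | B|.

    High scores flag statutory sections that likely borrow from a common
    source, such as a provision of the Field Code.
    """
    a, b = shingles(text_a, n), shingles(text_b, n)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)
```

Comparing each section of the Field Code against every section of a candidate state code and ranking the pairs by similarity is, in essence, how borrowed provisions can be surfaced at the scale of 30,000 pages.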
TheNextWeb: “The latest Global Digital Statshot from We Are Social and Hootsuite reveals that the number of people using social media around the world has just passed the momentous three billion mark.”
STARNet – “We have distributed over 2 million free eclipse glasses and 4,000 education kits to over 7,000 library locations (public libraries, state libraries, book mobiles, tribal libraries). This represents nearly one half of all libraries in the country. To find a library in your area, zoom in on the interactive map… then click on a drop pin for contact information. Visit our Eclipse Resource Center for more information!”