Monica Lewinsky. How desperate for attention are you, honey?

She’s constantly on the air, trying like hell to stay in the media spotlight, telling her story of how awful it was to be caught in the media spotlight. And you fucking people don’t see how stupid and transparent that is? Or you don’t care, because your viewership is just as fucking stupid and they can’t get enough of it? Christ on a stick, it’s just stupid on the face of it. Go away, Monica. Please. Go. Away.

ATVs taking over the streets and neighborhoods of Nashville, illegally

…and although there are a hundred articles about it, no one seems interested in posing the “why?” question. No one in a single fucking article has even broached it. I mean… what? Is that not the very first, natural question on your mind? Well, um, why the fuck isn’t it?

Ordinarily, when a person uses a vehicle like a weapon against a cop, that person gets shot. I’m not understanding any of this story, either its origins or the puzzling ineptitude of law enforcement in the face of it. No one seems to be posing normal, natural questions about it. That’s a red flag in my mind that something is wrong, but until just one journalist asks the one right question and does their fucking job, I guess I’ll be the only one to notice it. I’m not? Then – again – wtf?

Seems like they have the legal right to gun them down if they are destroying property and endangering people. If the police can’t or won’t do what they ought, then the well-armed citizens of Nashville should take it as their duty to shoot these fuckers on sight. What? You gonna lose sleep over an out-of-control and dangerous wild animal being put down? Not. Me.

Some Humans Can Sense Earth’s Magnetic Field, Fascinating Experiment Suggests

The ability to sense the Earth’s magnetic field—a trait known as magnetoreception—is well documented among many animals, but researchers have struggled to show that humans are also capable of the feat. Until now.

New experimental evidence published today in the science journal eNeuro suggests the human brain is capable of responding to the Earth’s magnetic field, though at an unconscious level. It’s not clear if our apparent ability to sense the magnetic field is in any way useful, as it’s likely a vestigial trait left over from our more primitive past. Given the new finding, however, researchers should investigate further to determine if magnetoreception is somehow contributing to our behavior or abilities, such as spatial orientation.

Magnetoreception is found among both invertebrates and vertebrates, and it’s probably a capacity that’s been around for a very long time. Some bacteria and protozoans exhibit magnetoreception, as do some migratory birds and sea turtles, who use the added sense to assist with navigation. Dogs are also sensitive to the Earth’s magnetic field, orienting their bodies along the North-South axis when they poop.

Around 30 years ago, scientists tried to determine if humans have a similar capacity, but to no avail. These pioneering efforts produced results that were either inconclusive or unreproducible, so scientists largely gave up, figuring magnetoreception is something outside the human realm. In the years that followed, work on animals increasingly pointed to magnetoreception as the result of complex neurological processing—a possibility that motivated Caltech geophysicist Joseph Kirschvink and neuroscientist Shin Shimojo to revisit the issue.

“Our approach was to focus on brainwave activity alone,” Kirschvink told Gizmodo. “If the brain is not responding to the magnetic field, then there is no way that the magnetic field can influence someone’s behavior. The brain must first perceive something in order to act on it—there is no such thing as ‘extra-sensory perception.’ What we have shown is this is a proper sensory system in humans, just like it is in many animals.”

To test whether the human brain is capable of magnetoreception, and to do so in a reliable, believable manner, Kirschvink and Shimojo set up a rather elaborate experiment involving a chamber specially designed to filter out any extraneous interference that might influence the results.

The isolated chamber, within which participants had their brainwaves monitored by electroencephalogram (EEG), was housed inside a Faraday Cage, which shielded all interior contents from external electromagnetic fields. Three orthogonal sets of square coils, called Merritt coils, allowed the researchers to control the ambient magnetic fields around a participant’s head. Acoustic panels on the wall reduced external noise from the building, while a wooden chair and isolated floor prevented any unwanted interference with the magnetic coils. A battery-powered EEG was placed next to the participant, which was connected to a computer in another room with an optical fiber cable.

During carefully controlled experiments, participants sat upright in the chair with their heads positioned near the center of the magnetic field, while EEG data was collected from 64 electrodes. The hour-long tests, in which the direction of the magnetic field was rotated repeatedly, were performed in total darkness. The experiment involved 34 adult volunteers, who collectively participated in hundreds of trials; all tests were done in a double-blind manner, and control groups were also included.

After the experiments, none of the participants said they could tell when or if any change to the magnetic field had occurred. But for four of the 34 participants, the EEG data told a different story.

As noted in the new study, the researchers recorded “a strong, specific human brain response” to simulated “rotations of Earth-strength magnetic fields.” Specifically, the magnetic stimulation caused a drop in the amplitude of EEG alpha waves between 8 and 13 Hertz—a response shown to be repeatable among those four participants, even months afterward. Two simple rotations of the magnetic field appeared to trigger the response—movements comparable to a person nodding their head up or down, or turning it from left to right.

The alpha rhythm is the dominant brain wave produced by neurons when individuals aren’t processing any specific sensory information or performing a specific task. When “stimulus is suddenly introduced and processed by the brain, the alpha rhythm generally decreases,” the authors wrote. The drop in alpha waves observed during these experiments suggested the brain interpreted the magnetic fields as some kind of stimulus—the neurological purpose or result of which is unclear. But as the new study pointed out, this observation now “provides a basis to start the behavioral exploration of human magnetoreception.”
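To make that analysis concrete, here is a minimal sketch, entirely my own illustration and not the authors’ code, of how a drop in 8-13 Hz alpha power between two EEG windows could be quantified. It assumes a made-up 250 Hz sampling rate and synthetic one-second windows, and uses a naive DFT for the band-power calculation.

    object AlphaPowerSketch {
      // Total spectral power of `signal` within [loHz, hiHz], via a naive DFT.
      def bandPower(signal: Array[Double], fsHz: Double, loHz: Double, hiHz: Double): Double = {
        val n = signal.length
        val loBin = math.ceil(loHz * n / fsHz).toInt
        val hiBin = math.floor(hiHz * n / fsHz).toInt
        (loBin to hiBin).map { k =>
          var re = 0.0
          var im = 0.0
          for (t <- 0 until n) {
            val angle = -2.0 * math.Pi * k * t / n
            re += signal(t) * math.cos(angle)
            im += signal(t) * math.sin(angle)
          }
          (re * re + im * im) / (n.toDouble * n)
        }.sum
      }

      def main(args: Array[String]): Unit = {
        val fs = 250.0 // assumed (hypothetical) EEG sampling rate in Hz
        val rng = new scala.util.Random(42)
        // Synthetic one-second windows: a 10 Hz alpha component plus noise.
        def window(alphaAmp: Double): Array[Double] =
          Array.tabulate(fs.toInt) { t =>
            alphaAmp * math.sin(2 * math.Pi * 10.0 * t / fs) + 0.2 * rng.nextGaussian()
          }
        val before = bandPower(window(1.0), fs, 8.0, 13.0) // pre-rotation window
        val after  = bandPower(window(0.5), fs, 8.0, 13.0) // post-rotation window, reduced alpha
        println(f"alpha power before: $before%.3f, after: $after%.3f, drop: ${100 * (1 - after / before)}%.1f%%")
      }
    }

On real recordings you would use the measured channels, the actual sampling rate, and a proper FFT library, but the before/after comparison is the same idea the study describes.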

The researchers don’t know how the human brain is able to sense magnetic fields, but Kirschvink has a favorite theory. There may be “specialized sensory cells that contain tiny magnetite crystals,” he said, which is currently “the only theory that explains all of the results, and for which there is direct physiological data in animals.” Back in 1992, Kirschvink and his colleagues isolated crystals of biogenic magnetite from human brains, so he may be onto something; other researchers should now dive into this possibility to flesh this idea out.

“Magnetoreception is a normal sensory system in animals, just like vision, hearing, touch, taste, smell, gravity, temperature, and many others,” Kirschvink told Gizmodo. “All of these systems have specific cells that detect the photon, sound wave, or whatever, and send signals from them to the brain, as does a microphone or video camera connected to a computer. But without the software in the computer, the microphone or video camera will not work. We are saying that human neurophysiology evolved with a magnetometer—most likely based on magnetite—and the brain has extensive software to process the signals.”

Looking ahead, Kirschvink would like to better understand the biophysics of this capacity, including measuring threshold sensitivities. Shimojo believes it might be possible to bring magnetoreception into conscious awareness, a possibility that could spawn entirely new directions of research. Imagine, for example, if future humans had a built-in compass, allowing them to sense magnetic north.

Michael Winklhofer from the Institute of Biology and Environmental Sciences at Carl von Ossietzky University of Oldenburg liked the new study, saying the authors “did everything to rule out artifacts [noise] which could easily occur during recording electrical brain activity in a changing magnetic field.” Also, the description of the setup and methods was so detailed that the study can be easily replicated, he said.

“For the first time in humans, clear responses to magnetic field changes were observed. Even though the magnetic field was not consciously perceived in the test persons that showed brain responses to the field, the study invites [other scientists] to follow up research to understand the mechanism by which the magnetic field elicits neuronal activity,” Winklhofer told Gizmodo.

Biologist Kenneth J. Lohmann from the University of North Carolina at Chapel Hill said it was a “fascinating and provocative study.” Given that “a number of other animals can sense Earth’s magnetic field, it is certainly within the realm of possibility that humans can as well,” he told Gizmodo. That said, he believed the results should be interpreted with “great caution.”

“It is one thing to find a subtle change in brain activity in response to a weak magnetic field, and another thing to show that people really detect and use magnetic field information in a meaningful way,” said Lohmann.

Indeed, for now we’ll have to be content with the observation that human brains can detect magnetic fields, and leave it at that. Researchers will now have to figure out why human magnetoreception exists, and if this capacity somehow extends to our behavior. Regardless, we can look forward to some exciting new science in the future.

[Stolen from: https://gizmodo.com/fascinating-experiment-suggests-some-humans-can-sense-e-1833377029]

Health care for all

They say that it could cost 30 trillion dollars to provide basic health care for free to all American citizens. They also say that the health care system we have will cost us 50 trillion, but I’m not canny enough on the numbers and nuances to make sense of the matter that way.

My question is whether you, personally, are willing to stand up and say, “You are not my kin, or friend, and I don’t mind that sickness, privation, and early death await you if I even suspect that my own way of life or creature comforts are threatened to any degree, or indeed, even if I suspect as much without any real understanding of what costs or benefits may actually accrue to me and mine personally from such a change to the status quo.”

You pretty much have to be a total douchebag to hold that position, and it’s also fairly transparently douchey. Inhuman, actually. But this is the position you pretty much must be holding, America. It’s just logical consequence. It’s what precipitates out of the collective behavior; manifest.

James Gunn reinstated to helm Guardians of the Galaxy 3

Told you so, bitches. Feckless, fearful execs stepped on their dicks trying to separate from Gunn, but they received blowback from cast members (ones with character, similarly unencumbered by the fear of being politically incorrect), fans, and anyone with any fucking sense. They never talked to anyone else about the job. They probably knew right away they would have to hire him back if they wanted to wring more money out of that franchise. They waited until the heat was focused elsewhere, then did what they had to do. I told you so, but more importantly, what is right never actually goes out of style, and it was never right to fire him. I’m glad. Guardians 2 was a genuine work of art masquerading as a mere pop-culture bauble.

Don’t be afraid to stand up to the Morality Police. Gunn didn’t stand up for himself, but he did make a great movie. That means I respect him as a movie-maker and not so much as a man, but I’ll still call this a win. Go ahead and hate Disney’s decision. I can’t enjoy it properly if you don’t hate them for it.

The People Who Hated the Web Even Before Facebook

Thirty years ago this week, the British scientist Tim Berners-Lee invented the World Wide Web at CERN, the European scientific-research center. Suffice it to say, the idea took off. The web made it easy for everyday people to create and link together pages on what was then a small network. The markup language was simple, and publishing was as painless as uploading something to a server with a few tags in it.

There was real and democratic and liberatory potential, and so it’s not at all surprising that people—not least Berners-Lee himself—are choosing to remember and celebrate this era. This was the time before social media and FAANG supremacy and platform capitalism, when the internet was not nearly as dependent on surveillance and advertising as it is now. Attention was more widely distributed. The web broke the broadcast and print media’s hold on the distribution of stories. HTML felt like a revolution.

Not to everyone, though. Just a few years after the web’s creation, a vociferous set of critics—most notably in Resisting the Virtual Life, a 1995 anthology published by City Lights Books—rose to challenge the ideas that underlay the technology, as previous groups had done with other, earlier technologies. This wasn’t the humbuggery of Clifford Stoll’s Newsweek essay arguing that the internet basically sucked. These were deeper criticisms about the kind of society that was building the internet, and how the dominant values of that culture, once encoded into the network, would generate new forms of oppression and suffering, at home and abroad.

Resisting the Virtual Life assails “the new machinery of domination,” contemplates an “ungovernable world,” considers the discriminatory possibilities of data harvesting, catalogs the unfairness of gender online, examines the “masculinist world of software engineers,” laments the “reduction of public space,” speculates on “the shape of truth to come,” and even proposes a democratic way forward. Its essays foresaw the economic instability the internet might bring, how “the cult of the boy engineer” would come to pervade everyone’s life, and the implications of the creation of huge amounts of personal data for corporate processing. “What could go wrong with the web?” the authors asked. The answer they found was: A lot. They called themselves “the resistance.”

This was before Jeff Bezos was the richest man in the world. It was before Facebook, before the iPhone, before Web 2.0, before Google went public, before the dot-com bust, before the dot-com bubble, before almost anyone outside Finland texted. Eighteen million American homes were “online” in the sense that they had America Online, Prodigy, or CompuServe, but according to the Pew Research Center, only 3 percent had ever seen the web. Amazon, eBay, and Craigslist had just launched. But the critiques in Resisting the Virtual Life are now commonplace. You hear them about Facebook, Amazon, Google, Apple, the venture-backed start-up ecosystem, artificial intelligence, and self-driving cars—even though the internet of 1995 bears almost no resemblance, technically or institutionally, to the internet of 2019.

Maybe as a major technological movement begins to accelerate—but before its language, corporate power, and political economics begin to warp reality—a brief moment occurs when critics see the full and awful potential of whatever’s coming into the world. No, the new technology will not bring better living (at least not only that). There will be losers. Oppression will worm its way into even the most seemingly liberating spaces. The noncommercial will become hooked to a vast profit machine. People of color will be discriminated against in new ways. Women will have new labors on top of the old ones. The horror-show recombination of old systems and cultures with new technological surfaces and innards is visible, like the half-destroyed robot face of Arnold Schwarzenegger in Terminator 2.

Then, if money and people really start to pour into the technology, the resistance will be swept away, left dusty and coughing as what gets called progress rushes on.


In the post-2016 world of the left, socialism is back and computers are bad. But computers have been bad before, and, not coincidentally, when various socialisms were popular.

Long before the internet and Resisting the Virtual Life, people fought the very idea of computers—mainframes, initially—beginning with the 1960s student movements. It wasn’t pure Luddism; computers were, quite literally, war machines. At Stanford, then a hotbed of radicalism, students staged sit-ins and occupied administration buildings. Even as the Vietnam War ebbed, many on the left worried that technology in the form of computerization and automation was destroying working-class jobs, helping bosses crush unions, and making work life worse for those who remained employed.

But as the 1970s crept into the 1980s, some of the military-industrial stank began to rub off. A computer that spit out Vietnam War predictions for Robert McNamara was one thing, but what about a network of computers that let anyone wander a digital frontier, associating with whomever they wanted beyond national borders or established identities? The meaning of networked computing began to change. These 1s and 0s could be bent to freedom.

“To a generation that had grown up in a world beset by massive armies and by the threat of nuclear holocaust, the cybernetic notion of the globe as a single, interlinked pattern of information was deeply comforting: in the invisible play of information, many thought they could see the possibility of global harmony,” wrote Fred Turner in From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism.

Turner’s book begins with a question: “How did the cultural meaning of information technology shift so drastically” from the Vietnam War protest days to the beginning of the dot-com boom? And his answer is that a set of figures in the Bay Area, led by Stewart Brand, who founded the Whole Earth Catalog, transformed the notion of computers from military-industrial infrastructure to personal tool through the 1970s.

Brand positioned these technologies as boons not for bureaucrats calculating missile trajectories, but more for hacker-freaks planning winning video-game maneuvers. In Rolling Stone, he declared the arrival of computing “good news, maybe the best since psychedelics.”

It helped that the United States had entered a period the historian Daniel Rodgers has called “the age of fracture.” American institutions, collectivities, and solidarities broke down in favor of a wildly individualist model of consumer action. “One heard less about society, history, and power and more about individuals, contingency, and choice,” Rodgers wrote. “The importance of economic institutions gave way to notions of flexible and instantly acting markets.”

The world was a place for individuals to make choices, and what they needed to make better choices was more information. Information was stored in computers, and therefore, networking individual people to one another would lead to new forms of collective action.

Apple and its charismatic salesman Steve Jobs were there to commercialize this new idea of the computer. Liberal technology enthusiasts such as Al Gore and conservative technology enthusiasts such as Newt Gingrich joined the movement to create a new consensus that the only role of government in the industry would be to create an environment friendly to the development of internet businesses and niche communities alike.

So when Berners-Lee wrote his 1989 proposal for the web, the world was ready. Tellingly, an institutional breakdown motivated his desire for a hypertext system. People kept leaving CERN and taking information with them. Organizational memory was lacking. At the same time, systems for creating that memory required that people agree to certain hierarchies of information and keyword taxonomies, which they were loath to do. His answer to this problem became a radically individual one: Anyone could create a page and link to anything. Get enough people doing so, and the flywheel of content creation would keep spinning. No institutions required. No holds barred. This was personal freedom, as implemented in a network protocol.

The internet’s early proponents saw all this potential. They gushed in the pages of Wired, headquartered south of Market Street, in San Francisco, long a down-and-out part of town. But across Market and up Columbus Avenue, in the heart of North Beach, where aging beatniks still had some small purchase, the poets and writers of City Lights Bookstore were not swayed.


“There are alternatives to the capitalist utopia of total communication, suppressed class struggle, and ever-increasing profits and control that forgets rather than resolves the central problems of our society,” wrote James Brook and Iain Boal, the editors of Resisting the Virtual Life. Those problems were obvious: “people sorted into enclaves and ghettos, growing class and racial antagonisms, declining public services (including schools, libraries, and transportation), unemployment caused by automation and wandering capital, and so on.”

And yet, for most people, the personal computer and the emerging internet obscured the underlying structural forces of society. “‘Personal’ computers and CD-ROMs circulate as fetishes for worshipers of ‘the free market’ and ‘the free flow of information,’” Brook and Boal wrote.

They knew they were up against “much—one could say ‘everything’—” in trying to assemble the resistance to the explosion of the internet. But their goal was not necessarily to win, but rather to “address an almost unnameable object—‘information age,’ ‘information superhighway,’ ‘cyberspace,’ ‘virtuality,’ and the proliferating variants—from a critical democratic perspective.”

It’s almost like they wanted to mark for future generations that there were people—all kinds of different people—who saw the problems. “Resisting the Virtual Life intends to provide correctives more profound than those generated by the cybernetic feedback mechanisms of ‘the marketplace of ideas,’” the editors wrote, “where scandalous defects are always answered by pseudocritiques that assure us that all is well, except for the inevitable bugs that the system itself will fix.”

The essays in the book are uneven, as you might expect. But some of them are stunningly prescient. In “It’s the Discrimination, Stupid!” which reads as a prequel to 2018’s The Age of Surveillance Capitalism, the University of Southern California professor Oscar H. Gandy Jr. argues that “personal information is used to determine our life chances in our role as citizens as well as in our lives as employees and consumers.” In a powerful reflection on the landscape of Silicon Valley, Rebecca Solnit concludes that, as a place, it is a nowhere, but one linked by supply chains to changes across the globe. The University of California at San Diego communications professor and media critic Herbert Schiller points out how the internet could reduce the power of nation-states, weakening them while transnational corporations grow stronger. Could electronically enhanced armies hold the people down, he writes, “while privately initiated economic forces are contributing to wildly disproportionate income distribution and gravely distorted resource utilization, locally and globally?”

And Ellen Ullman, who has continued to critique the tech world from the inside, might have made the most perfect critique of how the human desire for convenience would rule how technology was seen. “The computer is about to enter our lives like blood in the capillaries,” she wrote. “Soon, everywhere we look, we will see pretty, idiot-proof interfaces designed to make us say, ‘OK.’”

The on-demand economy would rule. “We don’t need to involve anyone else in the satisfaction of our needs,” she wrote. “We don’t even have to talk.” Smuggled inside these programs would be “the cult of the boy engineer,” “alone, out-of-time, disdainful of anyone far from the machine.”

If these critics from another era seem to have cataloged all that could go wrong, they also had a sense that things could go differently. The writer and historian Chris Carlsson, for example, saw hope in the organizing potential of online communities. “The threads of subversion we weave so quietly today must find their way to transform the self-destructive, brutal, and dehumanizing lives we lead at work, at school, and in the streets,” he wrote. “The trust we place in electronic links must again find a common home among our social links, until electronic ‘experiences’ take their rightful place as supplements to a rich, varied human life.”

Then again, he acknowledged that “it’s easier to imagine a lot of empty pointless verbiage flying around the electronic world, matched only by the piles of data gathered by our corporate and governmental institutions.”


Since the 2016 election—both its course and its outcome—Americans have been engaged in a newfound struggle to understand how the internet has changed their country and the world. It’s simply not possible to celebrate the birth of the web without acknowledging that the cyber-utopia never arrived. Look at many tech titans’ behavior over the past few years and you will see both scandalous defects and “pseudocritiques that assure us all is well.”

In this long moment of reevaluation, the industry and its products have come under attack from almost every angle: inside and outside, local and global, economic and social, legislative and rhetorical, capitalist and socialist. That hasn’t stopped the profits from raining down. The biggest tech companies are among the top 10 most valuable companies in the world. According to one ranking of brand equity, the four strongest brands in the world are Apple, Google, Amazon, and Microsoft. But once the consensus dropped away that internet technology was equal to progress, people inside and outside the industry found an unending fount of questionable practices. People treat their phone as they once did cigarettes.

As diagnoses are reached and suggestions made, these early critiques are worth remembering precisely to inoculate against the nostalgia for an earlier time. The seeds of our current technologically mediated problems were obvious to critics with the eyes to see them in 1995.

Examining the history of the web might yield a better internet in the future, but not by looking only at what we loved about the early days. The premises of the early web—valorizing individual choice, maximalist free speech, and dispersed virtual networks; ignoring institutional power and traditional politics—might require revision to build a new, prosocial web.

[Stolen from: https://www.theatlantic.com/technology/archive/2019/03/people-who-hated-web-even-before-facebook/584932/]

There are no adults in the room.

Iran’s foreign minister says bigotry in Western countries has led to the attacks on Muslims in New Zealand.

If Iran had a flood of Christian, Caucasian immigrants, and mounting pressure to change their society RIGHT NOW, you’d see exactly the same thing over there. Bigotry? It’s a human foible. Not a Western one. You fucking people and your lack of executive judgment are just appalling. This guy is supposed to represent the “best and brightest”, right? We’re doomed.

Look at Pakistan! They shoot up schools with children even when they are the same ethnic group and same religion! There is no end of bigotry and violence!

Tell the truth or die in a fire. All of you. I am on the side of fearless truth-tellers. There aren’t many of us, apparently, and none of us in the halls of power.

Humans are not infinitely malleable

It’s no good to sit on your little perch and say “people shouldn’t be bigots”. That’s exactly as useful as saying “It shouldn’t rain.” Humans are human animals. You think you are above it? I’m sure I could show you at least five ways in which you currently think and behave reflexively, irrationally, like the animal that you are. We reach for the comfort of categories under stress, and one of the first is “people like X do things like this”. No, honey. And no, just because a person looks like you, wears your clothes, and shares your labels, does not mean they think and feel as you do, or that either you or they are incapable of shocking behavior. The whole point of all the labels and abstractions is to create a layer of insulation between your psychology and emotions and the facts that trouble them. For example, we are capable of murder. Any of us. Lie to yourself to make yourself feel better, and escape the harsh light of truth. But know that some of us aren’t as weak and will not hide from the truth. When you find that your prescriptions are met with resistance from the likes of me, it will be because you are unable to think clearly, desperate as you are to maintain the delusions that preserve your sense of safety.

The Christchurch Massacre

Update: I was just flat wrong about those weapons already being banned in New Zealand, and went off half-cocked about it. The thing about it is, if you ignore how the shooter’s weapons looked and focus on how they operate, you’d find that they are no more deadly than the long guns you can purchase at Walmart. They just look mean. A certain kind of person is interested in that for the same reason they are interested in a truck front-end that “looks mean”, etc. Insecurity. It’s fucking everywhere. The discomfort of having to accept that you don’t have power over others is a particularly ugly form of insecurity, but it’s the one that animates the highly-political and the partisan.

Apparently, the gunman also posted “Subscribe to PewDiePie” before taking off to kill Muslim “invaders”. Pretty much exactly what I thought the irrational attraction to PewDiePie was and still is – the wordless, idiotic, reflexive call of biology on a mind that doesn’t have much else going on by way of agency. Only an idiot is “proud” of being white, or anything else they had no power or control over. You don’t find identity by matching your skin tone to others, or your religion. That’s pure weakness, because it’s using the same thoughtless, and again – reflexive – low-level motivations that are completely unexamined and lazy. No, if you think you are a proponent of Western Culture, then what you fight for is Individualism. That means you are a race of one. A religion of one. Your identity isn’t based on things that you had no hand in. Most people are idiots, and we are all insecure. It’s the insecurity that drives us to gather into groups and then make ideological statements of shared identity, and we do that to fool ourselves into feeling more comfortable and less insecure around strangers, since, after all, they wear the same shared identity, right? They must be just like you, right? Idiots. Self-knowledge is really valuable because it keeps you from going off the rails into this particular kind of stupidity. If you don’t value and work toward Individualism, which is nothing more than individual development, period, then you will forever potentially be a tool in the hands of others who can use your failure to develop as an individual like a little handle on your head, turning it this way and that.

Oh, by the way, guns are banned in Australia and New Zealand. That means this is exactly one of those cases that proves that bad actors will find a way to get guns, and in this case, there were no Good Guys With Guns to respond in time to the shooter. If your politics make this too difficult a truth for you, then good! Error always stings before it lets you go. This happened in a country which has all of the gun restrictions you could want in place. Just pause over it and let it sink in, because this is truth. It’s phenomenal and real, and not an exercise in abstraction.

And, by the way, AOC apparently is just a craven, batshit harpy in search of power, which is exactly what she looks and sounds like. “What use are thoughts and prayers?” Well, what use is banning guns, honey?

Open source advances deeper into hardware: The CHIPS Alliance project

Open-source hardware is older than you might think. Sun released OpenSPARC in 2007, and IBM started OpenPOWER in 2013. OpenSPARC would die after Oracle bought Sun, and OpenPOWER remains largely IBM-driven. With the recent arrival of RISC-V (pronounced Risk-Five), though, open-source CPU designs have finally caught fire. Now, the Linux Foundation is helping form the CHIPS Alliance project. CHIPS, in turn, will host and curate high-quality, open-source silicon device design code.

Backed by Esperanto, Google, SiFive, and Western Digital, the CHIPS Alliance will foster a collaborative environment for creating and deploying new open-source chip designs. These will be used across the entire spectrum of computing. This will include mobile computing, consumer electronics, and Internet of Things (IoT) chip and System on a Chip (SoC) designs.

While these early CHIPS Alliance backers are all committed to the RISC-V architecture, they want to move beyond just RISC-V. They’re creating an independent entity for companies and individuals to collaborate and contribute resources to make open-source CPU chip and SoC design more accessible.

“Open collaboration has repeatedly proven to help industries accelerate time to market, achieve long-term maintainability, and create de facto standards,” Mike Dolan, the Linux Foundation’s vice president of strategic programs, said in a statement. “The same collaboration model applies to the hardware in a system, just as it does to software components. We are eager to host the CHIPS Alliance and invite more organizations to join the initiative to help propel collaborative innovation within the CPU and SoC markets.”

Each of these early member companies is open-sourcing a wide variety of technologies. These include:

  • Google: A Universal Verification Methodology (UVM) based instruction stream generator environment for RISC-V cores. This environment provides configurable, highly stressful instruction sequences that can verify architectural and micro-architectural corner-cases of designs. I should also point out that Google has been hiring chip engineers in recent months.
  • SiFive: RocketChip SoC generator; TileLink interconnect fabric; Chisel, a new open-source hardware description language (a brief illustrative sketch follows this list); and the FIRRTL intermediate representation specification and transformation toolkit. The company, founded by RISC-V’s inventors, will also contribute and maintain Diplomacy, the SoC parameter negotiation framework.
  • Western Digital: Its high-performance, 9-stage, dual-issue, 32-bit SweRV core, together with a test bench and a high-performance SweRV instruction set simulator. WD will also contribute the specification and early implementations of the OmniXtend cache coherence protocol.
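For readers who have never seen a hardware description language embedded in Scala, here is a tiny, purely illustrative Chisel module. It is my own sketch, assuming the chisel3 library is on the build path, and is not code from any of the contributions above: a free-running 24-bit counter whose top bit drives an LED.

    import chisel3._

    // Illustrative only: a 24-bit counter whose most significant bit drives an LED,
    // so the LED toggles once every 2^23 clock cycles.
    class Blinky extends Module {
      val io = IO(new Bundle {
        val led = Output(Bool())
      })
      val count = RegInit(0.U(24.W)) // 24-bit register, reset to zero
      count := count + 1.U           // increment every clock cycle
      io.led := count(23)            // expose the top bit
    }

Elaborating a module like this produces FIRRTL and then Verilog, which is where the FIRRTL toolkit mentioned above fits into the flow.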

Looking ahead, in a statement, Dr. Amir Salek, Google Cloud’s senior director of Technical Infrastructure, said: “We are entering a new golden age of computer architecture highlighted by accelerators, rapid hardware development and open source architecture and implementations. The CHIPS Alliance will provide the support and framework needed to nurture a vibrant open-source hardware ecosystem for high-quality, well-verified and documented components to accelerate and simplify chip design.”

Specifically, Yunsup Lee, SiFive’s co-founder and CTO, hopes CHIPS will reboot chip design. “Semiconductor design starts have evaporated due to the skyrocketing cost of building a custom SoC. A healthy, vibrant semiconductor industry needs a significant number of design starts, and the CHIPS Alliance will fill this need,” Lee said.

Dr. Zvonimir Bandic, Western Digital’s senior director of next generation platforms architecture and co-founder of RISC-V, added: “The CHIPS Alliance will provide access to an open-source silicon solution that can democratize key memory and storage interfaces and enable revolutionary new data-centric architectures. It paves the way for a new generation of compute devices and intelligent accelerators that are close to the memory and can transform how data is moved, shared, and consumed across a wide range of applications.”

Once upon a time, and it wasn’t that long ago, innovation ruled in chip design. With the fresh air of open source blowing through processors, perhaps we’ll see those days return. With all the inherent security problems revealed in the dominant x86 processor world by the Meltdown and Spectre bugs, it’s well past time for new ideas.

[This article stolen from: https://www.zdnet.com/article/open-source-advances-deeper-into-hardware-the-chips-alliance-project/]