On June 14, 2012, Associate Professor Andrei Gritsan, postdoctoral fellow Sara Bolognesi, and graduate student Andrew Whitbeck—all from Johns Hopkins—gathered with several colleagues for a secret night-time meeting in a conference room at the CERN laboratory in Geneva, Switzerland. The occasion was their first look at a statistical analysis of data from proton collisions in the Large Hadron Collider (LHC), the world’s largest particle accelerator. So important were the data that the researchers had analyzed them without actually looking at the results, to avoid any chance of biasing their conclusions.
At this point, the researchers knew a graph with a bump in it would indicate strong evidence the collider had produced the elusive Higgs boson. The Higgs, first proposed in the 1960s, was needed to fill in the largest gap in the Standard Model, the leading theory in fundamental particle physics. Its existence would signify the existence of a field that permeates all space and imbues the matter in the universe with mass.
A flat graph, by contrast, would mean…nobody was quite sure what.
Gritsan, the leader of the team working at CERN, projected the graph on a screen, and the scientists knew their long wait was over.
“It all changed in an instant,” says Gritsan. “It was an emotional moment, and it left no doubt that we had something big.”
On the strength of the two bumps (a second had been revealed in another decay channel) and similar evidence from another research group, CERN’s leaders called a press conference for July 4, 2012. At the conference, they announced to the world that they had found a new particle with a mass of around 125 billion electronvolts, or 125 GeV (the electronvolt is particle physicists’ preferred way to express mass; 1 eV corresponds to a mass of about 1.8 × 10⁻³³ grams). Whitbeck was delegated to sit near the front of the room, armed with a battery of PowerPoint slides to explain the scientists’ analysis techniques, should any reporter ask. (None did.)
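For readers who want to check the arithmetic, the conversion from particle-physics units to everyday ones takes only a couple of lines. The sketch below (illustrative only; the conversion factor is the standard physical constant, roughly 1.783 × 10⁻³³ grams per eV) expresses the new particle’s 125 GeV mass in grams:

```python
# Back-of-the-envelope check: convert the Higgs mass from eV to grams.
# 1 eV/c^2 corresponds to about 1.783e-33 grams (standard physical constant).
EV_TO_GRAMS = 1.783e-33

higgs_mass_ev = 125e9                     # 125 GeV expressed in electronvolts
higgs_mass_g = higgs_mass_ev * EV_TO_GRAMS
print(f"{higgs_mass_g:.3e} g")            # roughly 2.2e-22 grams
```

Tiny by everyday standards, but for a single particle that is enormous—about 133 times the mass of a proton.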
Left unanswered at the conference, however, was whether the new particle was actually “the” Higgs boson. At the time, LHC researchers were even reluctant to call the particle “a” Higgs boson, preferring instead the cautious “Higgs-like particle.” Now, a year and a half later, LHC physicists are much more comfortable saying they have found “a” Higgs, if not yet “the” Higgs. And much of that progress has been due to efforts of the Gritsan lab at Johns Hopkins.
Gritsan and his team are part of the more than 2,000-person collaboration that manages the Compact Muon Solenoid, or CMS, one of two enormous detectors nestled into the LHC’s 17-mile-long tunnel. Using arrays of sensors similar to those in a digital camera, the detector takes precision snapshots of the debris spewed out when near-light-speed protons collide and turn their energy into new matter. The CMS was already under construction when Gritsan joined the collaboration in 2005, but he quickly made his mark on the team, finding ways to precisely align the instrument’s sensors so researchers could compute the exact paths particles would take through the detector.
Gritsan simultaneously developed methods to extract useful results from the masses of data the CMS would soon be collecting. The CMS, along with its sister detector ATLAS, was designed to capture not the Higgs boson itself but its decay products, because theorists had shown that the Higgs boson, if it existed, would decay into other particles before it could leave any physical record of its presence. Gritsan and his team looked for signatures of one of the Higgs boson’s possible decay “channels,” known as “H→ZZ.” In this channel, the Higgs decays to two Z bosons; these in turn quickly decay to four leptons—a class of particle that includes electrons and their heavier cousins, muons. As the leptons speed away at close to the speed of light, they leave traces in the layers of sensors that make up the CMS. The sensors then dump their data to a worldwide network of computers.
That’s when the analysis started. The amount of data to be sifted through was truly daunting—20 billion collisions recorded, each yielding roughly 1 megabyte—and the researchers’ only hope was to write clever computer programs to extract signals of the particles they sought from this vast, noisy background. Gritsan, Bolognesi, Whitbeck, and several former students in the lab spent years developing a sophisticated analysis method called the Matrix Element Likelihood Approach, or MELA (“mela” means “gathering” in Sanskrit and “apple” in Italian; Bolognesi says she and her colleagues hope their technique is as successful as Apple). MELA, which extracts information on the angles at which decay particles fly away from a collision, was instrumental in amplifying the signal to the so-called “five-sigma” certainty level that gave LHC leaders the confidence to call the famous 2012 press conference. In early 2013, MELA results gave physicists the confidence to call the new particle a Higgs boson, and in October, the two researchers who first predicted its existence, Peter Higgs and François Englert, received the Nobel Prize.
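The “five-sigma” threshold has a precise statistical meaning: the chance that random fluctuations in the background alone would produce a signal at least that strong. The sketch below computes that probability from the standard Gaussian tail formula (this is the conventional textbook definition, not the collaboration’s actual analysis code):

```python
import math

def one_sided_p_value(n_sigma: float) -> float:
    """Probability of a Gaussian fluctuation at least n_sigma above the mean."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

# Five sigma: the background-only hypothesis would fake a signal this strong
# only about 3 times in 10 million repeated experiments.
p = one_sided_p_value(5.0)
print(f"{p:.2e}")  # about 2.87e-07
```

That vanishingly small probability is why five sigma is the particle-physics community’s agreed-upon bar for claiming a discovery.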
With such blockbuster success, LHC scientists now find themselves in a peculiar position. On the one hand, they have made the biggest discovery in particle physics in decades. On the other hand, it is a discovery many people expected, and not finding the Higgs boson would in some ways have been more tantalizing. Meanwhile, the LHC is shut down until 2015 for repairs and upgrades, so any new revolutionary discoveries are at best several years down the road.
But Gritsan and his Hopkins colleagues aren’t just biding time until the LHC comes back to life. For one thing, they still have piles of data to analyze as they seek to further understand the new particle.
The Higgs boson’s mass was found to be consistent with the predictions of the Standard Model, yet it is puzzlingly small: expected quantum effects—so-called “loop corrections”—should drive the mass enormously higher, unless the theory is fine-tuned to an incredible degree to keep it low. This suggests that further surprises may be lurking. Moreover, the LHC has thus far operated only at collision energies up to around 8 TeV, just over half of its design energy, meaning researchers have a large energy range still to explore once the collider fires up again. New particles could be lurking there, as could clues to the nature of dark matter, dark energy, and the unexplained dominance of matter over antimatter in the universe. In short, says Gritsan, “this is just the beginning; we have found a completely new state of matter-energy, and we do not even know where it will take us.”
Throughout history, astronomers have struggled to coax even the smallest bits of information about our universe from the night sky. Their task is rapidly becoming easier, however, thanks to powerful modern telescopes and computers that are expanding the astronomical data stream from a trickle to a torrent. But this shift, while in many ways a researcher’s dream come true, also creates new challenges.
“No scientist has ever refused to take more data if they could,” says astrophysicist Alex Szalay, the Alumni Centennial Professor of Astronomy. “The question is how do we make sense of it all? And that’s becoming hard—very, very hard.”
In the past two decades, Szalay and his Johns Hopkins colleagues have become leaders in confronting these challenges. They have developed techniques to collect, store, and analyze vast quantities of astronomical data. Now they are applying what they’ve learned to fields as far-flung as genomics, linguistics, engineering, neuroscience, and ecology. “Big data” is fundamentally changing how science is done, says Szalay. “We are really undergoing a major scientific revolution right now.”
Szalay became involved in big data a little over two decades ago, when Johns Hopkins joined the Sloan Digital Sky Survey (SDSS), an ambitious effort to map around a quarter of the night sky in unprecedented detail. At the time, SDSS researchers were concerned about how to handle the 10 terabytes of data they planned to collect. Szalay started building databases to organize the information and allow scientists to analyze it, even though he admits “[he] didn’t know anything about databases” at the time.
Collaborating with legendary Microsoft computer scientist Jim Gray, Szalay created a new kind of system with the tools needed for analysis sitting on the same server as the data. SDSS then made both the data and the tools available to the astronomy community. Suddenly, scientific discovery was no longer limited to a small group of scientists who collected or had access to a data set—now anybody with an internet connection could get involved. As a result, SDSS has become one of the most productive scientific projects in history: it has already generated more than 5,000 publications from groups around the world, and researchers continue to mine both new and old SDSS data for additional discoveries. “The result is a lot of people are coming in with very clever ideas that even the collaboration could never think of,” says Szalay.
Some of the most innovative ideas have come from Assistant Professor Brice Ménard, an astrophysicist who joined the Johns Hopkins faculty in 2010. Using sophisticated statistical techniques, Ménard was able to map intergalactic dust, which is so faint that astronomers had not previously been able to detect it. Ménard showed that intergalactic space held far more of this dust than anyone had thought. He has more recently extended his techniques to map intergalactic gas and dark matter. Szalay says Ménard is an “example of the next generation of scientists who are playing these instruments like a master.”
Of course, the amount of data that seemed big when SDSS began now seems modest. Ten terabytes can fit onto a few palm-sized hard drives, and even the 400 terabytes the survey ended up collecting are hardly tremendous by today’s standards. But Johns Hopkins is also a member of the Large Synoptic Survey Telescope (LSST), a next-generation instrument that will begin operating in 2022. LSST will photograph the sky every few nights, collecting as much data in one night as SDSS gathered in a year. With so much data available, says Ménard, “You are limited by your own imagination.”
Faculty have also pushed big data science into new territory by creating an initiative that cuts across the traditional Johns Hopkins divisional boundaries. The Institute for Data Intensive Engineering and Science, or IDIES, is set to transform a host of disciplines. In ecology, for example, scientists from the university’s Department of Earth and Planetary Sciences are installing a global network of wireless sensors to collect data on soil temperature, moisture, and carbon flux. Because soil microbes release far more carbon dioxide than all human activities, the data from this network could answer questions of fundamental importance to understanding and predicting future climate change. Szalay, the institute’s director, says IDIES was the first interdisciplinary big data center of its type when it launched in 2009, and has since inspired similar efforts at other universities.
Johns Hopkins’ move into big data has positioned the university as a computing powerhouse. In 2008, Szalay and his colleagues launched Graywulf, a server cluster named in honor of Jim Gray, who had been lost at sea the previous year. Thanks to the team’s focus on increasing data throughput rates, Graywulf won a major competition for high-speed data processing, beating out entries from many places better known for computer science. Johns Hopkins faculty then bested themselves with Data-Scope, which came online this summer and reads data 30 times faster than Graywulf, making it the fastest data-processing system at any university in the world. Data-Scope, which is available to selected research groups from Johns Hopkins and other campuses, lives in the Bloomberg Center for Physics and Astronomy alongside the Homewood High-Performance Cluster, which serves researchers from the Krieger School of Arts and Sciences and the Whiting School of Engineering.
The future may be hard to predict, but one forecast both Szalay and Ménard are willing to make is that big data science is only going to get bigger. And they strongly recommend that any up-and-coming scientist, regardless of field, pick up some computational skills early on. “My message to young people is that this is a different way of thinking about your science,” says Ménard. “I think it’s just at the beginning. It’s exciting. The possibilities are infinite.”
Theoretical physicist Oleg Tchernyshyov and his colleagues in Johns Hopkins’ Institute for Quantum Matter study various kinds of magnetic materials from ferromagnets to spin ice and spin liquids. Their recent work is leading to new theories about the intricate and challenging properties of magnetism.
Spin is a key aspect of magnetism: an electron’s spin, its intrinsic angular momentum, gives the electron a magnetic moment pointing in a specific direction, corresponding to the north pole–south pole axis of an ordinary bar magnet.
Tchernyshyov and his collaborators study how spins behave in various magnetic materials, focusing on defects, or textures—spins that don’t conform to the orderly arrangement of spins in the material as a whole. In a ferromagnet for instance, where all the spins are neatly aligned to point in the same direction, disturbing one of the spins can cause it to flip, from pointing up, say, to pointing down. That flipped spin will then return to its original direction by inducing its neighbor to flip, which will then induce another neighbor to flip and so on. As the process continues, an oppositely flipped spin will appear to travel through the material as though it were a particle, or “quasiparticle,” known as a magnon.
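The flip-by-flip propagation described above can be caricatured in a few lines of code. This toy model (purely illustrative, not the group’s actual formalism) tracks a single flipped spin as neighbor-by-neighbor flips carry the defect down a one-dimensional chain, so it moves like a particle:

```python
# Toy model (illustrative only): a single flipped spin "hops" along a
# 1-D chain of otherwise aligned spins, mimicking a magnon's motion.
chain = [+1] * 8   # eight spins, all pointing up
chain[0] = -1      # disturb the first spin: flip it down

positions = []
for step in range(len(chain) - 1):
    i = chain.index(-1)                               # locate the flipped spin
    positions.append(i)
    chain[i], chain[i + 1] = chain[i + 1], chain[i]   # neighbor flips; defect moves on

positions.append(chain.index(-1))
print(positions)  # [0, 1, 2, 3, 4, 5, 6, 7] — the defect travels like a particle
```

No individual spin ever moves; only the pattern of the disturbance does, which is exactly what makes the magnon a “quasiparticle.”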
Over the decades, physicists have developed a thorough understanding of dynamic processes such as the motion of magnons in systems where spins exist in an orderly arrangement. But things can be more complicated in magnetic materials where the spins don’t line up at all. These materials are called spin liquids, by analogy with ordinary liquids, in which molecules have no specific arrangement—unlike the orderly arrangement of atoms in solid crystals.
In classical physics, complete information about the state of a system of particles exists when the state of each and every particle is individually specified. In the classical picture, the spins of two electrons can be in one of four states: both pointing up, both pointing down, the first up and the second down, or vice versa. Quantum mechanics allows for infinitely many states besides those, including an entangled state in which the two spins are guaranteed to have opposite orientations even though neither spin’s orientation is individually defined. Measuring them would yield random results that are nonetheless perfectly anticorrelated (one up, the other down, or vice versa). “In a way, the spin of a particle in this entangled state is both up and down at the same time,” says Tchernyshyov. “Physicists and computer scientists hope to harness the power of quantum entanglement to parallelize computation.”
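To make the entangled state concrete, here is a minimal sketch (illustrative only, using plain Python) that writes the two-spin “singlet” state as a list of amplitudes over the four classical basis states and applies the Born rule, which says measurement probabilities are the squares of the amplitudes:

```python
import math

# Basis order for two spins: |up,up>, |up,down>, |down,up>, |down,down>
basis = ["up-up", "up-down", "down-up", "down-down"]

# The entangled singlet state: (|up,down> - |down,up>) / sqrt(2).
# Neither spin has a definite direction on its own.
singlet = [0.0, 1 / math.sqrt(2), -1 / math.sqrt(2), 0.0]

# Born rule: measurement probabilities are the squared amplitudes.
probs = {name: amp ** 2 for name, amp in zip(basis, singlet)}
print(probs)
# Only the anticorrelated outcomes ever occur, each half the time:
# 'up-up' and 'down-down' have probability 0; 'up-down' and 'down-up' each ~0.5
```

The result is random, yet perfectly anticorrelated: measure one spin up and the other is guaranteed to come out down, just as the quote above describes.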
As a byproduct of this entanglement, quasiparticles in quantum spin liquids can carry fractional amounts of spin (half a unit of angular momentum), whereas magnons in an ordered magnet carry integer spin. But the existence of such materials in real life hasn’t been easy to establish.
One proposed class of spin liquid candidates is a family of materials called kagome antiferromagnets. Tchernyshyov and his colleagues have calculated how kagome antiferromagnets should behave in ways that can be tested via neutron scattering. A recent experiment by a group at MIT, in collaboration with Collin Broholm—the Gerhard H. Dieke Professor at Hopkins—used the Tchernyshyov group’s predictions to provide evidence that the material actually is a quantum spin liquid.
The theories of Tchernyshyov and his colleagues are being used by experimental physicists to understand quantum effects in other magnetic materials. For example, the Tchernyshyov group has predicted behaviors that would be observed in materials called quantum spin ices. Instead of just a single disturbance moving around, a quantum spin ice can contain a string of misaligned spins. In a paper published last year in Physical Review Letters, Tchernyshyov and graduate student Yuan Wan describe such strings in a compound with the chemical formula Yb2Ti2O7.
Ordinarily, spins, just like ordinary magnets, are dipoles, with both a north and south pole. But in quantum spin ices, a quasiparticle can have a north pole without a south pole, and vice versa. These monopoles would be found at the ends of strings, and so could be thought of as “monopoles on a leash,” Tchernyshyov says. (They are similar in concept but technically different from magnetic monopoles discussed in cosmology, which are predicted to exist but not yet observed.) Current experimental work by colleague Peter Armitage, associate professor in the department, has provided hints that the monopoles on a leash described theoretically by Tchernyshyov and Wan may actually exist.
Much of Tchernyshyov’s work is of mostly basic research interest, but understanding how defects move around in magnetic materials does have potential uses. Manipulating defects in magnetic nano-wires could be used to store information and even perform computations, for instance.
Efforts along those lines are underway in studies by experimental physicist Stuart Parkin and his colleagues at IBM’s Almaden research laboratory in California. Parkin’s group recently reported success in guiding the motion of defects in a nanowire network—a sort of artificial spin ice—in work relying on theoretical descriptions of such systems published by Tchernyshyov and graduate student Gia-Wei Chern in 2005.
“When other researchers and basic scientists are using your ideas, it’s the best thing a theorist can hope for,” says Tchernyshyov.