September 12, 2017 | Roosevelt Hotel, New York, NY
Next to Grand Central Station on East 45th St and Madison Avenue, Midtown, New York

Press Coverage

Big Data Engenders New Opportunities and Challenges on Wall Street
Cisco Hits ‘Warp’ Speed, Replicates Feeds in 50 Nanoseconds (Securities Technology Monitor)

Big Data Engenders New Opportunities and Challenges on Wall Street

From: HPC Wire
September 27, 2012
Nicole Hemsoth

One of technology’s most pervasive buzzwords echoed in the ears of attendees at this year’s one-day HPC on Wall Street conference in New York City, as panel after panel addressed the challenges and opportunities that big data presents. From the opening remarks regarding Wall Street’s traditional concern of low latency, delivered by Cisco CTO Paul Perez, to the multiple open-ended discussions that took place in concurrent panels, the “big data” problem was a much-discussed topic.

For this industry, however, the concerns around what the overall technology ecosystem is touting as big data are quite different. The exploding volume of data that other industries are dealing with is compounded in the financial space by regulations mandating massive, long-term storage.

But the industry itself is finding value in the ability to tap those datasets in both real-time and historical context. What this means is that Wall Street is looking for snappy new ways to keep the meaningful data at the fore, while maintaining a monster archive of historical transactions and other data for more leisurely access and analysis.

During the course of a panel on the exploding demands for storage, analytics, risk management and ultra-low latency (not to mention the compute horsepower required), Emile Werr, VP and Head of Enterprise Architecture at NYSE Euronext, described the system-wide challenges of massive, swift data across their HPC infrastructure. He noted that, for them, the challenges went far beyond the “three Vs” of big data: volume, variety and velocity. Their entire approach and methodologies had to shift.

The volume and complexity challenges were keenly felt in the context of the volatility of changing systems, new markets, and even new businesses his firm is exploring. Note that NYSE Technologies is the spin-out company from the exchange of the same name, and it offers financial services that encompass an increasingly large buffet of software and services, from custom middleware packages to hosted exchange analysis.

They have had to keep pace with an evolving exchange market for their customers, necessitating new approaches to their system environments on both the hardware and software sides. According to him, these tweaks and new services have allowed them to expand their traditional market business significantly.

Werr, who proudly notes that he’s the “big data guy” at NYSE, says that one thing that isn’t obvious about their requirements is that the data fed into their systems is not user-friendly and certainly doesn’t come ready-made for BI platforms. This means there is a whole, often invisible layer of complex data enrichment that is required.

But when you’re talking about billions of transactions per day, building systems that can take this unfriendly data and turn it into regulation-friendly, analysis-ready information is a key, ongoing struggle. Still, they think they may have solved some pieces of that system-wide puzzle and they’re marketing their architecture as a big data, HPC problem solver for this industry.

As mentioned earlier, another aspect of NYSE’s “macro data architecture strategy” that Werr defines is the regulatory-plus-storage problem. “We are obligated to maintain data for seven years,” he said, not without some exasperation. “There’s not one system out there that could actually store that data and have it online. Besides, it wouldn’t be practical. It’s old, old data, it’s just used for regulatory needs and then maybe trending over time details.”

But if the big data hype that insists all bytes are a potential goldmine rings with any validity, NYSE Euronext has a solution that could lend some credence to that ideal. The company has developed a clever system whereby data is scattered across distributed resources in such a way that it can be provisioned on the fly. Using an on-demand approach they’ve refined, the system can serve an array of applications, everything from a historical audit to an analyst’s real-time query.

NYSE Technologies is commercializing its reported success with this inventive macro data architecture, which Werr says has been running smoothly in production for four years. While light on specifics, he noted that the system works in harmony with messaging systems and feed handlers designed to capture certain transactions with minimal latency.

Those files are generated in small mini-batches and then fired off to the firm’s “transformation-archive farm” that offloads a lot of the ETL processing across a commodity cluster. The data then moves into the enrichment phase where relational models can be constructed and dropped into distributed storage for the rapid, on-demand access capabilities he hinted at earlier. At the prettier end of the process is a services layer that allows for rapid provisioning and access for all applications as well as APIs for systems and schedulers, not to mention a more seamless end-result for that data to be analyzed for any other business purpose.
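To make that flow concrete, here is a minimal sketch of a mini-batch capture, ETL, enrichment and on-demand provisioning pipeline in Python. Everything in it (the Trade record, the batch size, the DistributedStore class and its methods) is an illustrative assumption standing in for NYSE's actual feed handlers, transformation-archive farm and services layer, which the article does not detail.

```python
# Minimal sketch of a mini-batch capture -> ETL -> enrich -> serve pipeline.
# All names and shapes here are illustrative, not NYSE's actual systems or APIs.
from dataclasses import dataclass
from typing import Iterable, Iterator
import itertools

@dataclass
class Trade:
    symbol: str
    price: float
    size: int
    ts_ns: int          # capture timestamp, nanoseconds

def mini_batches(feed: Iterable[Trade], batch_size: int = 10_000) -> Iterator[list[Trade]]:
    """Group the raw feed into small mini-batches before shipping them to the ETL farm."""
    it = iter(feed)
    while batch := list(itertools.islice(it, batch_size)):
        yield batch

def transform(batch: list[Trade]) -> list[dict]:
    """ETL step: normalize raw records into an analysis-ready relational shape."""
    return [
        {"symbol": t.symbol, "notional": t.price * t.size, "ts_ns": t.ts_ns}
        for t in batch
        if t.size > 0                      # drop malformed records
    ]

def enrich(rows: list[dict], ref_data: dict[str, str]) -> list[dict]:
    """Enrichment step: join in reference data (e.g. listing venue) per symbol."""
    for r in rows:
        r["venue"] = ref_data.get(r["symbol"], "UNKNOWN")
    return rows

class DistributedStore:
    """Stand-in for the distributed storage layer behind the services API."""
    def __init__(self) -> None:
        self.partitions: dict[str, list[dict]] = {}

    def write(self, rows: list[dict]) -> None:
        for r in rows:
            self.partitions.setdefault(r["venue"], []).append(r)

    def provision(self, venue: str) -> list[dict]:
        """On-demand access: hand an application just the slice it asked for."""
        return self.partitions.get(venue, [])

if __name__ == "__main__":
    feed = [Trade("IBM", 200.5, 100, 1), Trade("AAPL", 650.0, 50, 2)]
    store = DistributedStore()
    for batch in mini_batches(feed, batch_size=1):
        store.write(enrich(transform(batch), {"IBM": "NYSE", "AAPL": "NASDAQ"}))
    print(store.provision("NYSE"))
```

The point of the sketch is the ordering: capture in small batches, offload the transformation work to a cluster, enrich against reference data, then let a thin services layer hand out only the slice each application asks for.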

A well-oiled machine, no? Werr says that it took a lot of determination to climb out of their old paradigm of being a big database shop with the standard Oracle, Sybase, etc. tools. At the heart of that shift is the need for ever-faster ingestion of data. They’re at the point now where they can load around 20 terabytes per hour into their federated server farm. Since they have a short window of genuine production data, they’re then able to quickly provision that data into sandboxes, allowing for more refined operation on specific subsets of the data or the use of narrowly defined tools and integration approaches.
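As a rough back-of-the-envelope check on what that ingest rate implies, the short snippet below converts 20 terabytes per hour into a sustained throughput figure. The 64-node cluster size is a purely hypothetical assumption for illustration; the article gives no figure for the size of the federated farm.

```python
# Back-of-the-envelope: what a 20 TB/hour ingest rate implies for sustained throughput.
# The 64-node cluster size is an assumption for illustration, not a figure from the article.
TB = 10**12                      # decimal terabyte, in bytes

hourly_ingest_bytes = 20 * TB
sustained_bytes_per_sec = hourly_ingest_bytes / 3600
print(f"Sustained ingest: {sustained_bytes_per_sec / 1e9:.1f} GB/s")   # ~5.6 GB/s

nodes = 64                       # hypothetical federated-farm size
per_node = sustained_bytes_per_sec / nodes
print(f"Per node:         {per_node / 1e6:.0f} MB/s")                  # ~87 MB/s per node
```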

Whether or not we want to think abstractly about this big data craze as a mere concept or hype-bubble, the fact remains that the vendors on every conference panel throughout the day seemed to find some element of value in this topic. By presenting the opportunities and challenges of all the hardware and software this technology touches, attendees were left with the impression that the financial industry is in for some major retooling.


Cisco Hits ‘Warp’ Speed, Replicates Feeds in 50 Nanoseconds

From: Securities Technology Monitor
September 19, 2012
Tom Steinert-Threlkeld

Cisco Systems tried to make a leap to the front of the pack in supplying high-speed switches to trading firms and exchanges, combining thrusts in five areas of technology to push speeds for certain applications to under 100 billionths of a second and provide analytics along the way.

The new series of switches, its Series 3548 model line, operates in normal mode at 250 nanoseconds for taking in, processing and forwarding trading information. In a “warp mode,” the system cuts the time to 190 nanoseconds, according to chief technology officer Paul Perez. And when a “warp span” is applied to the network involved, specialized applications such as replicating a feed of market data can be cut to 50 nanoseconds.

The company focused on bringing new technology and products to bear in five areas: lowering latency in processing data, handling “microbursts” of orders, adding high-performance features such as translating data for fast transmission on networks, making the system itself programmable, and time-stamping each packet that enters a switch with one-nanosecond precision, according to a presentation by Perez at the High Performance Computing on Wall Street conference at the Roosevelt Hotel in New York Wednesday.

Cisco says its switch can replicate data feeds in 50 billionths of a second.

Such combinations, Cisco hopes, play to its strength in providing networking gear. "The network is going to be the center of almost all innovation," chief executive John Chambers said, in a televised introduction to what it calls its AlgoBoost technologies.

"Your job is to capture alpha,’’ said Perez to an audience of software, hardware and systems developers for trading firms. “My job is to capture innovation. And I believe we are now innovating inside the rate of innovation of our competitors.''

The Warp Span technology allows data to bypass the heart of networks, a process Perez called “cardiac bypass,” and find an available port on a server or switch to which the data can be sent directly.

The switch series also includes a “hitless” process for translating network addresses, so algorithmically driven trades can be sent to any venue without a delay.

Custom chips Cisco has designed also allow firms to analyze how a switch is performing while in production, helping trading firms adjust so they can discover prices faster, speed up or redirect order flow, and manage regulatory requirements.

Some analytical capabilities are also encoded into the chips. Snapshots of packets in the switch’s memory buffer can create histograms that show, for instance, what is happening in a microburst, by the millisecond. Trading firms can then adjust their infrastructure to react to future microbursts more adroitly.
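As a rough illustration of that idea, the sketch below bins per-packet timestamps into one-millisecond buckets and flags any bin whose packet count exceeds a chosen threshold. The data shapes, threshold and function names are assumptions made for the example; they are not part of Cisco's AlgoBoost feature set, beyond the one-nanosecond packet timestamps the article describes.

```python
# Sketch: turning per-packet timestamps into a millisecond histogram of a microburst.
# The switch exports 1 ns packet timestamps; everything else here (data shape,
# threshold, function names) is an illustrative assumption, not Cisco's AlgoBoost API.
from collections import Counter

def microburst_histogram(timestamps_ns: list[int]) -> Counter:
    """Bucket packet arrival timestamps (in nanoseconds) into 1 ms bins."""
    NS_PER_MS = 1_000_000
    return Counter(ts // NS_PER_MS for ts in timestamps_ns)

def detect_microbursts(hist: Counter, packets_per_ms_threshold: int = 10_000) -> list[int]:
    """Return the millisecond bins whose packet counts exceed the chosen threshold."""
    return sorted(ms for ms, count in hist.items() if count >= packets_per_ms_threshold)

if __name__ == "__main__":
    # Synthetic example: a quiet feed with a burst concentrated in millisecond 5.
    quiet = list(range(0, 5_000_000, 10_000))            # ~100 packets per ms
    burst = list(range(5_000_000, 6_000_000, 50))        # ~20,000 packets in one ms
    hist = microburst_histogram(quiet + burst)
    print(detect_microbursts(hist))                      # -> [5]
```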

Perez said 10 trading firms and exchanges are going into test mode with the 3548 series and AlgoBoost technologies. The switches should be in use by the end of the year.

Cisco expects “a fair amount of demand hereon,” Perez said.

A competitor, Arista Networks, is not standing still, though. Also overnight, the firm, led by Sun Microsystems co-founder Andy Bechtolsheim, said its new 7150 series switches can forward instructions between ports inside 350 nanoseconds.

Traders can also “eliminate 100s of microseconds of forwarding delay” by using a capability that translates instructions to run at what is called “wire speed,” meaning the data needs no further software assistance and can run at the maximum speed the hardware circuit allows.

But Cisco said it will not stand still.

"This is not an end. This is simply a next step,'' Perez said.