A successful Olympics with an innovative and cost-effective workflow

I think it’s safe to say that this Olympics has been a success; London 2012 really did deliver. On top of Team GB’s great performance, the BBC also delivered the coverage to those not fortunate enough to get tickets.

The BBC had the power to draw the nation together and kicked off with the opening ceremony, which pulled in 27 million viewers, the highest ratings for any TV broadcast since well before the turn of the century. “Super Saturday” was the middle Saturday of the Games, after a great opening week of coverage. A Saturday to remember: we saw Jessica Ennis win gold in the Heptathlon, then Greg Rutherford take gold in the Long Jump and, if that wasn’t enough, we witnessed Mo Farah win gold in a tough 10,000m. 17 million shared the experience courtesy of the BBC.

The 2012 Olympics was at your fingertips. Many of us followed the Games on digital platforms and the iPlayer was ever popular online. During the Games I visited various customers and most, if not all, had the Games streaming to their desktops, with some offices having multiple streams running in one room! On top of this you could keep up to date on mobile devices using apps. The BBC ran 24 “Red Button” TV channels throughout the Games, available online or via the red button on satellite platforms. They produced 2,500 hours of live footage in high definition, and every red button channel was watched by at least 100,000 viewers at some point. Online usage was massive, with 34.7 million unique online visits in the first week alone. The Olympics really did show what the future holds for digital media and the underlying technologies that allow it to be produced.

It got me thinking: how do broadcasters from other countries deliver the content back to their home audiences? Is it really that much easier for the host country to produce such high-end programming across multiple platforms?

One Olympic workflow that got my attention was Televisa’s. Based in Mexico City, Televisa is the world’s largest Spanish-language broadcaster, and for the London 2012 Olympics it implemented an innovative remote workflow using Sienna from the UK company Gallery.

For the Beijing Olympics in 2008, Televisa created two large playout studios in China and relocated 250 staff for the duration of the Games to operate the facility. Its approach for the London Olympics was different, not because the Beijing workflow wasn’t suitable, but because the company was keen to evolve the workflow and take advantage of new technologies. As a result, Televisa sent fewer than half the number of people to London than it did to Beijing, the majority of them working in front of the cameras. Considerably less equipment was sent to London too, with the main infrastructure remaining in Mexico City.

The challenge was to keep the two sites, London and Mexico City, in sync, allowing editors in Mexico City to work on the content as it was created in London.

It is already common for broadcasters to send video feeds via dedicated satellites, downlinking them into the facility back in their home cities. However, multiple long-duration satellite connections for HD video are very expensive, and it’s not always practical to send rushes across dedicated satellite links. For these reasons, it has been necessary in the past to send editors to the event and then send the compiled content back to the home facility via the satellite feed. Overall, that’s a very costly process.

Gallery’s & Televisa approach was to take advantage of a remote workflow. The content was captured in London but the system driven from Mexico City. With the editors located in Mexico City and the high resolution media located in London, an elegant and streamlined workflow was needed to share the media.

The solution was to transfer proxies across the Wide Area Network (the internet) and store them in Mexico City. The editors then worked with local data, where performance was not an issue. The next challenge was how to conform the proxy edits back to the London-based high-resolution media.
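To make the shape of that proxy-first approach concrete, here is a minimal sketch of the general idea, not Sienna’s actual implementation: each clip captured on site is transcoded to a small proxy and shipped over the WAN, while the full-resolution original stays on local storage. The paths, bitrates, remote host and tools (ffmpeg and rsync) are all assumptions for illustration.

```python
# Proxy-first transfer sketch: transcode each high-resolution clip to a small
# proxy and push only the proxy over the WAN; the original never leaves site.
import subprocess
from pathlib import Path

HIRES_DIR = Path("/media/london/hires")                  # hypothetical on-site storage
PROXY_DIR = Path("/media/london/proxy")
REMOTE = "edit-store.mexico.example.com:/media/proxy/"   # hypothetical remote share

def make_proxy(hires: Path) -> Path:
    """Transcode a clip to a low-bitrate H.264 proxy suitable for WAN transfer."""
    PROXY_DIR.mkdir(parents=True, exist_ok=True)
    proxy = PROXY_DIR / (hires.stem + "_proxy.mp4")
    subprocess.run([
        "ffmpeg", "-y", "-i", str(hires),
        "-vf", "scale=640:-2",            # shrink the picture
        "-c:v", "libx264", "-b:v", "1M",  # ~1 Mbit/s video
        "-c:a", "aac", "-b:a", "96k",
        str(proxy),
    ], check=True)
    return proxy

def ship(proxy: Path) -> None:
    """Push the proxy to the remote edit site; --partial lets interrupted copies resume."""
    subprocess.run(["rsync", "-av", "--partial", str(proxy), REMOTE], check=True)

for clip in sorted(HIRES_DIR.glob("*.mxf")):
    ship(make_proxy(clip))
```

The point is that only a few megabits per second of proxy ever needs to cross the ocean, while the heavy high-resolution files stay in London.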

Using the Wide Area Network brought its own challenges, the main one being latency. The typical latency on a Local Area Network inside a broadcast facility might be less than 1 millisecond, but once you start working over any sort of distance this rises quickly: up to 200 milliseconds from Europe to the US, and as high as 300 or 400 milliseconds from Europe to some parts of Asia. Any major increase in latency can transform TCP performance from fast to painfully slow; actual transfer rates can drop from 100 Mbit/s to less than 1 Mbit/s at these sorts of latencies. What’s more, simply adding more bandwidth doesn’t help much, because latency dictates the bandwidth-delay product, which matters far more than the raw connection speed. Again, you could use a dedicated point-to-point network, but as with satellite, the costs are high.
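A quick back-of-the-envelope calculation shows how hard latency bites. Assuming a single TCP stream with a 64 KB window (a common default at the time, with no window scaling), throughput is capped at roughly the window size divided by the round-trip time, whatever the link speed:

```python
# Why latency, not raw bandwidth, throttles a single TCP transfer: throughput
# is capped at about window_size / round_trip_time, however fast the link is.

def tcp_throughput_mbit(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on single-stream TCP throughput in Mbit/s."""
    rtt_s = rtt_ms / 1000.0
    return (window_bytes * 8) / rtt_s / 1_000_000

WINDOW = 64 * 1024  # 64 KB window, a common default without window scaling

for rtt in (1, 100, 200, 400):
    print(f"RTT {rtt:>3} ms -> at most {tcp_throughput_mbit(WINDOW, rtt):6.1f} Mbit/s")

# Approximate output:
#   RTT   1 ms -> at most  524.3 Mbit/s   (LAN: the link itself is the limit)
#   RTT 100 ms -> at most    5.2 Mbit/s
#   RTT 200 ms -> at most    2.6 Mbit/s   (Europe to the US)
#   RTT 400 ms -> at most    1.3 Mbit/s   (Europe to parts of Asia)
#
# Filling a 100 Mbit/s link at 200 ms RTT needs roughly 2.5 MB in flight at
# once (the bandwidth-delay product), far more than a 64 KB window allows.
```

The gap between that 64 KB window and the roughly 2.5 MB of data that must be in flight to fill a 100 Mbit/s link at 200 ms is the bandwidth-delay product problem in a nutshell.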

As a result, Gallery developed a system to address these challenges using Sienna MediaVortex, part of a split-site Sienna media infrastructure. Sienna introduced the concept of a Distributed Media Cloud, where media is not stored centrally, as you might normally expect with a cloud infrastructure, but instead resides across multiple linked locations, each with seamless access to all the rest, giving the impression of centralised media.

This enables assets to be conjoined: multiple sites hold their own local copy of the media, but the copies remain conjoined in the MAM layer, so logging information entered at one site is automatically propagated to every other site sharing the asset.
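As a rough mental model of a conjoined asset (my own sketch, not Gallery’s actual data model), picture each site holding its own local copy of the media under a shared asset ID, with any log entry made at one site pushed to every peer holding the same ID:

```python
# Toy model of conjoined assets: each site keeps its own catalogue and its own
# copy of the media, but the copies share an asset ID, so a log entry added at
# one site is pushed to every other site holding the same asset.
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    media_path: str                                           # this site's local copy
    log: list[tuple[str, str]] = field(default_factory=list)  # (timecode, note)

class SiteCatalogue:
    def __init__(self, name: str):
        self.name = name
        self.assets: dict[str, Asset] = {}
        self.peers: list["SiteCatalogue"] = []

    def add_log(self, asset_id: str, timecode: str, note: str, _from_peer=False):
        self.assets[asset_id].log.append((timecode, note))
        if not _from_peer:                    # propagate once to every linked site
            for peer in self.peers:
                peer.add_log(asset_id, timecode, note, _from_peer=True)

london = SiteCatalogue("London")
mexico = SiteCatalogue("MexicoCity")
london.peers, mexico.peers = [mexico], [london]

london.assets["OLY-10000M"] = Asset("OLY-10000M", "/media/hires/10000m.mxf")
mexico.assets["OLY-10000M"] = Asset("OLY-10000M", "/media/proxy/10000m_proxy.mp4")

# A logger in Mexico marks a moment against the proxy...
mexico.add_log("OLY-10000M", "01:23:45:10", "Farah hits the front")
# ...and the conjoined copy in London inherits the same marker.
assert london.assets["OLY-10000M"].log == mexico.assets["OLY-10000M"].log
```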

For the London Olympics the Sienna MediaVortex workflow linked the large Sienna system at Televisa in Mexico City to a smaller Sienna system in London, run by a small crew. The London setup provided 20 channels of ingest and 4 channels of playout, along with more than 100 TB of storage and a Media Asset Management system linked to the infrastructure in Mexico. A 100 Mbit data connection carried proxy media propagated from London to Mexico, along with control interfaces for the London systems.

Operators in Mexico remotely controlled the ingest feeds in London, while loggers in Mexico worked with the proxy video flowing across the MediaVortex connection to log each event as it happened. Each time a log entry was made in Mexico, the conjoined asset in London inherited the same logging information, and vice versa, so editing could take place at either end using all of the logging data. Editors in Mexico worked with proxy media conjoined across the WAN connection, using the logged markers to speed up media selection and searching. They used a web-based cuts assembly tool that worked in real time, even while the media was still being ingested in London!

Having completed a remote proxy edit in Mexico, editors had a one-click workflow to instruct the London Sienna system to render a finished high-resolution package. This was then loaded into a baseband playout channel via a remote connection and played out as part of a show, or sent across the line down a live satellite connection. Alternatively, the finished high-resolution asset was transferred via MediaVortex to Mexico.
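Conceptually, the conform is simple because the proxy in Mexico and the high-resolution original in London share the same asset ID and timecode, so the finished proxy edit only needs each clip reference swapped for the matching high-resolution file before rendering. A minimal sketch, with hypothetical asset IDs and paths:

```python
# Conform sketch: an edit decision list cut against proxy media is resolved,
# clip by clip, to the matching high-resolution files before rendering.
from dataclasses import dataclass

@dataclass
class Cut:
    asset_id: str     # shared between the proxy and the hi-res copy
    tc_in: str
    tc_out: str

# The proxy edit made in Mexico City, expressed purely as asset IDs + timecodes.
proxy_edit = [
    Cut("OLY-HEPTATHLON", "10:14:02:00", "10:14:31:12"),
    Cut("OLY-10000M",     "01:23:40:00", "01:24:05:08"),
]

# London's lookup from asset ID to its local high-resolution media.
hires_media = {
    "OLY-HEPTATHLON": "/media/london/hires/heptathlon_800m.mxf",
    "OLY-10000M":     "/media/london/hires/10000m_final.mxf",
}

def conform(edit: list[Cut]) -> list[tuple[str, str, str]]:
    """Swap each proxy reference for the hi-res file holding the same asset ID."""
    return [(hires_media[c.asset_id], c.tc_in, c.tc_out) for c in edit]

for src, tc_in, tc_out in conform(proxy_edit):
    print(f"render {src} from {tc_in} to {tc_out}")
```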

Besides being a platform for outstanding sporting achievements, the 2012 London Olympics really did showcase some exciting new technologies.
