BroadMotion Decodes Real-Time JPEG2000 on NVIDIA GPUs

VANCOUVER – September 14, 2010 – BroadMotion today announced support of its JPEG2000 decoding technology for the NVIDIA Fermi-based Quadro® and Tesla™ professional graphics processing units (GPUs). By also leveraging the NVIDIA CUDA™ parallel computing architecture, BroadMotion’s technology is able to decode digital cinema compliant 2K JPEG2000 streams at more than 24 frames per second (fps), achieving real-time results by utilizing the latest NVIDIA professional GPUs. The decoding technology is currently available to select customers and will be widely available in the first quarter of 2011.

“We have had our sights set on GPU technology for several years,” said Jeff Brooks, president of BroadMotion. “By demonstrating our ability to decode real-time 2K streams, we have proven that digital cinema technology can be achieved in a cost-effective GPU configuration. In future releases, we are targeting all decoding resolutions and frame rates in the digital cinema specification, namely 2K48fps, 4K24fps and 4K48fps on a wide variety of NVIDIA GPUs; thus enabling cost-effective digital cinema solutions on standard computing parts.”

“BroadMotion has made a strategic, smart choice for its current and future customers by utilizing CUDA technology and our GPUs to accelerate its latest decoding solutions,” said Andrew Cresci, general manager, vertical market solutions, NVIDIA. “JPEG2000 decoding technology, especially at digital cinema resolutions, requires significant processing power, a perfect fit for the massively parallel processing power that NVIDIA delivers.”

JPEG2000 is the next-generation International Organization for Standardization (ISO) still image and video compression standard. In 2005, it was adopted by Digital Cinema Initiatives (DCI) as the format for the storage and distribution of motion pictures. JPEG2000 addresses several weaknesses inherent in existing video formats by providing improved visual fidelity at high compression rates, low latency, increased error resiliency in wireless network environments and multi-resolution decoding from a single source file. JPEG2000 is currently being deployed in digital cinema, HD broadcast production, wireless transport of in-home entertainment and video surveillance applications.

About BroadMotion
BroadMotion Inc. develops and licenses high-performance JPEG2000 encoders and decoders for the digital cinema, high definition television (HDTV) broadcast, video surveillance and consumer electronics markets. For more information, please visit www.broadmotion.com

All product or service names mentioned herein are the trademarks of their respective owners

Company Contact: Jeff Brooks, jeff.brooks@broadmotion.com, (604) 725-5333

Posted in Digital Cinema, JPEG2000 | Comments Off

Compression Basics

I had contemplated authoring an article about compression, but came across an interesting series of articles from DSP DesignLine. Here is an excellent background tutorial on data compression written by Steven W. Smith, Ph.D.:

Data Compression: Part 1 (Huffman and RLE explored)
Data Compression: Part 2 (Delta encoding and GIF explored)
Data Compression: Part 3 (JPEG and MPEG explored)

Video Compression (a much older article written by Amit Shoham, but good nonetheless)

Posted in Standards, Video Engineering | Leave a comment

Throughput vs. Bitrate

Here is a very common query relating to video compression, usually with embedded implementations. There is a vernacular of terms that is often improperly stated or misunderstood, and I’d like to help alleviate some of that confusion. As a preface, our own web site is not overly clear on some of these points either…but we are currently working to change that.

The term bitrate is pretty well established and basically means the number of bits of data per second that can be transported over some type of networked medium. This medium can be a data bus (e.g. PCI-X), a hardwired connection/protocol (e.g. HDMI), a protocol over a hardwired connection (e.g. HD-SDI over coax, GigE) or a wireless connection (e.g. 802.11, UWB).

Digital video, whether compressed or uncompressed, can be transported over all of these mediums. But in the case of a compressed video form (e.g. JPEG2000, H.264 or MPEG-2) there are two primary processing activities for which the term “throughput” is often applied, that being encoding and decoding (hence the term CODEC). In other words, the video content needs to be compressed (encoded), transported over the networked medium, and then decompressed (decoded) at the target destination. “Throughput” is often applied to mean how fast that encoding or decoding process can occur.

For example, if you use a digital camcorder the first instance of compression often happens on the camcorder itself, into the DV format. Then it might be transported via Firewire or USB to your computer. Your computer will need to decode it to display the video. On the camcorder there is an embedded processor chip that will compress the video content in “real-time” to a storage medium. The only function of that chip is to encode video, so its “throughput” is fast and it can handle a large amount of content…for example HD content.

In this example, you are not transporting the video in real-time, so the restrictions of the wired interface may not be all that important. It may be annoying to wait while a video downloads from your camera, but Firewire and USB can usually push the content across at or close to real-time. Once it’s on your computer, there are a number of factors by which the “throughput” moniker could be applied. Some of them will vary depending on the state of various activities on your computer. How fast your hard drive can extract and deliver data can be termed “throughput”. The bus speed of your computer can influence “throughput” as well. And your computer’s processor is probably the biggest determining factor of “throughput”.

I have an Intel dual-core processor on a MacBook Pro. Under certain circumstances I can play HD video content in real time. Under other circumstances I cannot…in which case the decoder will begin to drop frames in order to keep pace. It’s important to understand what the various bottlenecks may be and how they could affect your “throughput”. As a consumer, this may affect the quality of your viewing experience. If you are an engineer designing a video capture system, then you need to be able to match the “throughput” of your processes to the desired video specifications of your system.

So let’s use a real-world example. Let’s say you are designing a system that will encode (compress) HD 720p video, transport it over an 802.11n wireless network, then decode (decompress) the video and play it on a video monitor. The raw uncompressed data contained in an HD 720p video feed will result in a bitrate of at least 633 Mbs (this is without audio, video only). This assumes a resolution of 1280×720, an RGB color space at 8 bits per plane for 24-bit color, and 30 frames per second. The equation looks like this: 1280*720*3*8*30 = 663,552,000 total bits. Divide by 1024 to convert to Kbs and again by 1024 to get Mbs.
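The arithmetic above can be sketched in a few lines of Python (illustrative only; same assumptions as the text):

```python
# Raw bitrate of an uncompressed HD 720p feed (video only, no audio):
# 1280x720 resolution, 3 color planes (RGB) at 8 bits each, 30 fps.
width, height = 1280, 720
planes, bits_per_plane = 3, 8
fps = 30

bits_per_second = width * height * planes * bits_per_plane * fps
print(bits_per_second)         # 663552000 bits per second

# Divide by 1024 twice to express the figure in Mbs (binary megabits).
mbs = bits_per_second / 1024 / 1024
print(round(mbs, 1))           # 632.8, i.e. the ~633 Mbs quoted above
```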

To encode that content in real-time, you would need an encoder and processor capable of processing at least 633 Mbs of “throughput”. For a variety of reasons, this benchmark is often stated in terms of MB/s (megabytes instead of megabits) and sometimes in terms of mega-samples per second, which is more of an embedded hardware benchmark. If, for example, a vendor states throughput of 100 MB/s this usually means they are capable of handling up to 800 Mbs (100 MB/s * 8 bits/byte) of throughput. In this circumstance, they could easily handle 633 Mbs of data and still have plenty of headroom for things like audio.

The bitrate that is transported via the networked medium is really a function of compression. Let’s say you have an 802.11n wireless connection. Theoretically it can move data at 248 Mbs. However the practical data rate is closer to 75 Mbs. If you have data that, in its raw form, is currently 633 Mbs and you need to move it over a 75 Mbs transport, then you would need to compress at a ratio of 8.44:1 (633 Mbs / 75 Mbs). That is a very feasible solution and the visual result should prove to be quite good. In my next post, I’ll dig further into these concepts and how they apply to JPEG2000 specifically.
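The sizing exercise in this post can be collected in one place. A minimal sketch, using the numbers from the text (the 100 MB/s vendor figure is the hypothetical example from above, not a real product spec):

```python
# Convert a vendor's MB/s throughput figure to Mbs, then work out the
# compression ratio needed to fit the raw 720p feed onto a practical
# 802.11n link.
raw_mbs = 633            # uncompressed HD 720p video, Mbs
vendor_mb_per_s = 100    # hypothetical vendor throughput spec, MB/s

vendor_mbs = vendor_mb_per_s * 8     # 800 Mbs of encoder throughput
assert vendor_mbs >= raw_mbs         # the encoder keeps up, with headroom

link_mbs = 75                        # practical 802.11n data rate
ratio = raw_mbs / link_mbs
print(f"need {ratio:.2f}:1 compression")   # need 8.44:1 compression
```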

Posted in JPEG2000, Standards, Video Engineering | Leave a comment

JPEG2000 Portable Field Recorder

Here’s a pretty cool product that we just came across. Codex Digital just announced a JPEG2000 based portable storage solution, the size of a toaster. Intended for use in professional broadcast and cinematography, the product is weather and shock resistant to meet the demands of rugged field environments.

From their press release:

The Codex Portable is the first portable disk-recorder to handle all formats up to 4K at cinema-quality, and the first to handle both video and data-mode cameras.

Flexible I/O configurations mean the Codex Portable can record from virtually every digital camera available today – including all HD cameras in video mode, plus data-mode from cameras such as the ARRI D-20 TM and DALSA’s Origin®. It can also record Red Digital Cinema’s RED ONE™ camera in 4K data-mode, when it becomes available.

Recording is made to hot-swappable, shock-mounted RAID disk packs that can hold up to three hours of continuous recording at the system’s highest quality – the first portable recorder (disk or tape) to offer such capability and capacity. The compression method used is JPEG2000, a wavelet-based industry standard, which is visually indistinguishable from the original and is comparable to the highest-quality mode of HDCAM-SR tape.

Product availability is “late 2007”.

More from the company’s website: www.codexdigital.com

Posted in Broadcast, Digital Cinema, HDTV, JPEG2000, Products | Leave a comment

To JPEG2000 or JPEG 2000? That is the question…

William Shakespeare once wrote, “What’s in a name? That which we call a rose by any other name would smell as sweet.” True enough in Shakespeare’s time but then again, William never had to deal with the ISO. We don’t have that luxury and are quickly going to have to answer the question, “To JPEG2000 or JPEG 2000; it should be…”

So what difference does it make? “You’re just ‘splitting hairs’ Jeff!” you say? Seeing as my biggest beef with JPEG2000 is the name itself, it’s important to get the naming convention right while we are still in the early stages of its growth. There is already confusion surrounding JPEG2000, so it doesn’t help when there’s a lack of consistency in the name itself.

Why all the worry? Well, I have seen an interesting naming divergence develop over the past seven years in JPEG2000 regarding whether or not to include a SPACE in the spelling. The market is roughly split 70/30, with 70% using JPEG2000 and 30% using JPEG 2000. Take note of the space between JPEG and 2000 in the latter; that’s the issue at stake here.

To be clear, here is what I am basing my opinion on. I turned to the source, Google, and the results pan out as follows:

A search on the term “JPEG2000” yields 1,350,000 results. A search on “JPEG 2000” returns 807,000 hits, thus the roughly 70/30 split I mention above. Strictly speaking it’s a 62.6%/37.4% split, but 70/30 is close and round numbers are easier to digest. Be careful, however: make a mistake and search for JPEG 2000 (without quotes) and you get a staggering 20,700,000 hits. Whoops. That’s a lot of links to wade through just for forgetting the quotes.
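For the record, here is the split computed from those hit counts (a quick sanity check, nothing more):

```python
# Work out the JPEG2000 vs "JPEG 2000" split from the Google hit
# counts quoted above (1,350,000 vs 807,000).
jpeg2000_hits = 1_350_000
jpeg_2000_hits = 807_000

total = jpeg2000_hits + jpeg_2000_hits
pct_condensed = round(100 * jpeg2000_hits / total, 1)   # 62.6
pct_spaced = round(100 * jpeg_2000_hits / total, 1)     # 37.4
print(pct_condensed, pct_spaced)
```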

A further analysis of the terms over at Google Trends shows that JPEG2000 (blue) has always ranked higher than “JPEG 2000” (orange).

All this of course flies in the face of the ISO naming convention, which is “JPEG 2000”. The official ISO second edition of JPEG 2000 Part 1 (ISO/IEC 15444-1:2004) has 25 references to “JPEG 2000” in the document and ZERO references to JPEG2000 (no surprise). So it’s clear: the ISO committee is using the “spaced” version whereas the majority on the Internet is using the “condensed” version. It seems that the “official” version is not translating into widespread market adoption but rather market confusion.

Turning to “The JPEG committee home page”, they actually intermix usage. Really? That’s not in the ISO JPEG2000 spec! For example, on the home page itself, they have two uses of “JPEG 2000” and one of “JPEG2000”. On the JPEG2000 homepage (“the” JPEG2000 section within the site), they have two instances of JPEG2000 and ten of “JPEG 2000”. Actually, there should be three instances of JPEG2000, but the site’s authors have erred in the title of David Taubman’s book (see below). Overall it is a 30/70 split, as opposed to a single consistent usage! And if the JPEG committee homepage cannot decide, then woe to the rest of us.

Looking elsewhere online, on the JPEG2000 page on Wikipedia there is a complete breakdown of consistency, with a high degree of intermixing of terms. There are 25 instances of JPEG2000 and 71 instances of “JPEG 2000” referenced (mentions in hypertext links not included in the totals)*. Wikipedia entries change so often that these stats will differ over time, considering ANYONE can edit the page. Interestingly enough, the ratio on Wikipedia is almost the inverse of what is found on the Internet as a whole. The main thing to note is that the multiple-author nature of Wikipedia exposes the confusion in the marketplace surrounding the name. “Is it JPEG2000 or JPEG 2000?”, I ask all the Wikipedia authors!

*Note: these stats are as of June 5/07.

Moving along, let’s look at the literary works on JPEG2000 located on Amazon.com. When examining the “Bible” of the JPEG2000 industry, “JPEG2000: Image Compression Fundamentals, Standards and Practice” by David Taubman & Michael Marcellin, the title says it all. The authors of the other main book published exclusively on the subject, “JPEG2000 Standard for Image Compression: Concepts, Algorithms and VLSI Architectures” by Tinku Acharya and Ping-Sing Tsai, have also chosen JPEG2000. A JPEG2000 search on Amazon yields 345 items, while “JPEG 2000” yields 430 hits. The “JPEG 2000” results, however, are intermixed with results for “JPEG-2000” (note the use of a dash; more on this below), so the true number is lower than 430. So searching on the respective terms gives roughly a 50/50 split for items for sale on Amazon. Most importantly, though, the two definitive works on the subject are expressed as “JPEG2000”.

Just for some additional variety, on the JasPer homepage, which hosts Part 5 of the standard or the “reference software”, Mike Adams uses “JPEG 2000” three times, “JPEG2000” once and “JPEG-2000” (the “dash” version) a whopping 23 times. “JPEG-2000”? Yet another twist in an already confused naming convention, in my opinion.

So what does Google think of this third option? Searching “JPEG-2000” on Google actually does not yield usable results, as the search engine treats the “-” differently and won’t return results specifically for “JPEG-2000”. Instead it gives results with JPEG2000, JPEG 2000 and JPEG-2000 all mashed together.

And just when you think you have it all figured out, head on over to Yahoo and all the results are different still! “JPEG2000” at Yahoo yields 698,000 hits, “JPEG 2000” gets back 431,000 hits and “JPEG-2000” results in 408,000. Caution: once again the “JPEG-2000” dash version gives an unpredictable mash-up of results. Top line, however: the split is 60/40 and aligns with Google’s bias towards “JPEG2000” over “JPEG 2000”.

As you can most likely tell from our website, we at BroadMotion have chosen to standardize on JPEG2000 for a number of reasons.

First and foremost, JPEG2000 is unique and won’t be confused in a search query with standard JPEG. Simply put, most articles written on JPEG were authored before JPEG2000’s inception. That being the case, a search for JPEG2000 will direct the user to the right content, not standard JPEG content. There will be some content that doesn’t follow this rule of thumb, but in my experience it is a minority.

Second, by using all one word, search engines don’t require quotes around the term! So what? Remember from above: forget the quotes and a search on JPEG 2000 in Google yields 20,700,000 hits. That’s because it includes any page that has both JPEG and 2000 somewhere on it, not necessarily next to each other. That’s a more than 25x increase in hits, but not the right hits.

But the challenge is to get everyone in the industry on the same page, before two different usages become entrenched. It is already an uphill battle with the similarity in names between this technology and standard JPEG. Calling it “JPEG2000” and “JPEG 2000” and “JPEG-2000” does not help! Pick one and stick with it, lest the freedom of choice now compound itself ten years from now.

While the standard is still in its relative infancy, changes effected now will be far easier to make than 20 years from now. I would urge any reader of this article who has a connection to JPEG2000 in any way to consider using JPEG2000 in their product literature, white papers, marketing collateral and in the printed word wherever it is found. “JPEG 2000” is one more eccentric anomaly in the naming convention of an otherwise great technology!

Luckily, in the end it seems that the marketplace, not the ISO, has decided where this debate is going to be headed. Both search engines show a clear bias on the part of content creators to utilize JPEG2000 over “JPEG 2000” in web pages. The leading literary works on the subject are clearly favoring JPEG2000. Companies such as BroadMotion and others, by and large are using JPEG2000 and hopefully the ISO will eventually follow suit.

Interestingly enough, when you search on JPEG 2000 (the quote-less version), Google returns the results list with this helpful suggestion: “Did you mean: jpeg2000”

Enough said, the great Google has chosen.

JPEG2000 it is.

Shakespeare would be proud.

Posted in JPEG2000, Standards | 2 Comments

JPEG2000 coming to you from Mars…

Turns out that NASA has recently released new digital images of Mars from their Planetary Data System (PDS) and they are utilizing JPEG2000 to encode the data.

From the HIRISE (High Resolution Imaging Science Experiment) blog:

“The solution for many NASA missions has been the development of the centralized Planetary Data System (PDS). The PDS is several things: a collection of websites, a search capability, an archive, a database, a learning tool, etc. The PDS Imaging Node is located at http://pds-imaging.jpl.nasa.gov/ and acts as “the curator of NASA’s primary digital image collections from past, present and future planetary missions.” These missions include Voyager, Galileo, Cassini, and many more. Now the Mars Reconnaissance Orbiter (MRO) has been added to the list, with the HiRISE team releasing our first several months of image data.

What we have released is an archive of the HiRISE Experiment Data Records (EDRs) and Reduced Data Records (RDRs). EDRs are in the *.IMG file format and represent individual CCD channels (remember, there are 14 CCDs in the HiRISE camera and two channels per CCD, for a total of 28 channels). These EDRs are cleaned up, calibrated, stitched together, and mapped to Mars’ geometry, resulting in the RDR products. RDRs are in the *.JP2 and *.LBL formats. JPEG2000 is the technology that enables us to offer our gigantic images to the scientific community and the public in a timely and efficient manner. An observation’s image data are in the *.JP2 file and its meta data are in the detached *.LBL files. To view these products, JPEG2000 compatible software is required (see our site for a list of offerings)”.

LINK TO HiRISE BLOG

Now you can see in real time one of the main advantages of JPEG2000: image streaming. JPIP is the function within JPEG2000 that allows for streaming image data to the user on request, without the need to download the entire image. And when you’re dealing with images of ~650MB each, downloading only the image data you’re interested in, as opposed to all 650MB, is a time saver indeed!

Try out the JPEG2000 viewer on the HiRISE main site here and see for yourself if you can find any little green men waving back when the orbiter flew overhead! ;-)

Posted in JPEG2000, Standards | Leave a comment

NAB, Here We Come

The BroadMotion team is yet again making another trip to Vegas. Is there a trade show anymore that isn’t held in Vegas? Not that this is a problem, but it would be great to see the inside of a convention center of another city some time!

BroadMotion will be one of the featured vendors in the Altera Booth C3347. We will have two demonstrations running, although only one at a time.

One will show real-time encoding of SD content from a video camera. Yes, our technology can encode content greater than SD, but the demo/development environment ran into a few issues over the past month. The result is that we can only connect an analog video source to it. It’s still pretty cool and will give visitors an opportunity to see J2K in action…encoded on an Altera Cyclone II FPGA.

Our second demo will feature HD video content pre-encoded on an Altera Stratix 2S90 FPGA. As referenced in a past post, this content has been encoded such that multiple extractions can be demonstrated in real-time. This is also quite cool and should help raise the awareness around J2K’s capabilities.

Come by and say hello!

Posted in Trade Shows | Leave a comment

Encode Once, Display Anywhere

One of the more powerful features of JPEG2000, and probably one with the least amount of awareness, is the ability to extract multiple resolutions out of a single bitstream. From a video encoding viewpoint, this is a very instrumental feature when you consider the potential change in content distribution models this can enable.

For the sake of time I need to keep this post relatively short, so I will skip the technical detail. But I will revisit this post with additional content once we get past the upcoming NAB show.

The JPEG2000 format enables you to encode content layers that can be extracted to various device geometries or varying image dimensions. This would allow a content owner to create a single source file by which to deliver content to everything from a digital cinema projector down to a cell phone. The device would extract only the content it requires, such that the data rate for a digital cinema projector would be large while that for a cell phone would be rather small. But the real magic is that it is able to pull this content from a single source bitstream.
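As a rough sketch of the resolution-scalability math (not BroadMotion’s implementation): each discrete wavelet transform level in a JPEG2000 codestream halves the image dimensions, so a decoder can stop extracting at whichever level matches its display.

```python
# Illustrative only: the image dimensions available at each DWT
# resolution level of a JPEG2000 codestream. A small-screen decoder
# simply ignores the higher-resolution subbands in the bitstream.
import math

def resolution_levels(width, height, levels):
    """Dimensions at each resolution level, full size first."""
    return [(math.ceil(width / 2**i), math.ceil(height / 2**i))
            for i in range(levels + 1)]

# A 2K digital-cinema frame encoded with 5 DWT levels:
dims = resolution_levels(2048, 1080, 5)
for w, h in dims:
    print(f"{w} x {h}")
# 2048 x 1080 down to 64 x 34 -- all from one bitstream
```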

If you were the owner of this content, this would considerably reduce the cost and security headaches surrounding multiple masters and/or transcoding for various devices. It would also give the content creator ultimate control over the quality of the final product. Now how’s that for cool.

By the way, we are working on building a demonstration for this functionality for the NAB show in April. Details forthcoming!

Posted in JPEG2000, Standards | Leave a comment

JPEG2000 on Sand Hill Road

One of the benchmarks of technology adoption is when the venture capital market becomes aware of the opportunity and backs that position up with investment. So, what do VCs think of JPEG2000? Hard to say at this point as I have yet to see a public announcement around a VC based investment specifically guided towards JPEG2000. There have been a number of investments being made in digital cinema, but that too is rather difficult to classify.

Be it known, we have decided to test the VC waters at BroadMotion. We are strong believers in both JPEG2000 and the advantages our technology brings to the encoding/decoding process for the format. It’s always a tough call whether to pursue external investment because that process introduces new headaches to running a business. However, we decided that if our time-to-market could be accelerated then it was worth further exploration.

On the Road Again

I will not go into too many details about our specific activities primarily because we want to keep that private. But I can say that we have spent the past month or so pitching our plan and ideas to VCs located in Vancouver and the Valley (San Jose, Palo Alto, Menlo Park). In and of itself, this has been a great learning experience because it really forces you to refine your business model, optimize your value proposition and hone your presentation skills. We had an opportunity to pitch to many groups located in the Valhalla of venture capital…that being Sand Hill Road in Menlo Park.

If you ever get a chance to pitch a business plan to one of the top tier VCs in the Valley, I highly recommend the experience. You will encounter some of the best technical minds with the business savvy to back it up. We could go into a long debate on the efficiency of the venture capital market, but it’s their money and their rules…go play the game. These guys are smart and you need to be extremely prepared on every detail of the market, your business and your technology.

Big Picture

But do they know much about JPEG2000? In terms of the technical specifics of J2K, I would conclude the VCs lacked knowledge of the format. But that’s to be expected; their focus is a few levels above that. More importantly, they recognize a number of big trends that are making JPEG2000 an interesting technology play:

  1. Bandwidth as a point of friction is dropping rapidly (as is processing power)
  2. Video is being moved around from point to point and to lots of different devices, it’s now common behavior
  3. Video quality is becoming increasingly more important now that capable displays are within reach of everyday consumers

That said, the value proposition of JPEG2000 as applied to specific solutions for various markets is really quite compelling. Does that make it viable for investment? Time will tell. If it were 1999 and you had the ability to invest in an online music service coupled with a trendy and expensive device, would you take it?

P.S. – Good Eats and Drinks

Special kudos to Vino Locale in Palo Alto (www.vinolocale.com). We snuck into this beautiful Victorian house after a long day of meetings. Randy, the owner, introduced us to the wine producers of the Santa Cruz valley. He made fantastic selections for us that paired nicely with small plates that were absolutely delicious. If you get into Palo Alto for dinner, make a stop there and tell Randy that Chris and Jeff sent you.

Posted in JPEG2000, Standards | Leave a comment

JPEG2000 – That’s JPEG, right?

The most common question I get asked about JPEG2000 is, “That’s JPEG, right?” It’s a natural question really; I’d probably assume the same thing if I wasn’t in the image compression business. And as most people aren’t “in” the image compression industry, it gets asked a lot. In my opinion, it is the single biggest reason why JPEG2000 has not taken off the way it should, but that’s slowly changing.

So why did they call it JPEG2000? JPEG stands for Joint Photographic Experts Group, which is the name of the committee responsible for the formation of the original JPEG standard. Work on JPEG began in 1982, culminating in the publication of the standard in 1992, after which widespread adoption began.

The foundation of JPEG compression is based on the DCT (Discrete Cosine Transform), which at the time was “cutting edge” in compression theory and technology. JPEG has enjoyed tremendous success since; it is the most widely used compression format for continuous tone (full color) images on the planet. It has found itself at home on the web, handheld devices such as digital cameras, cell phones, iPods, scanners, printers and just about anywhere you have full color images. And because of its widespread use and adoption, most people know the name and are familiar with it. So, when they hear “JPEG2000”, they naturally assume “That’s JPEG, right?”

Regarding JPEG2000, the only thing it and JPEG have in common is that the same ISO committee was involved in its development and standardization! The underlying technology at the heart of JPEG2000 is based on DWT (Discrete Wavelet Transform) compression, which is fundamentally different from the DCT used in JPEG. In fact, it is an order of magnitude more complex and offers a rich set of features not found in JPEG or in any other ISO still image format.
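To make the DCT/DWT distinction a little more concrete, here is a toy one-level Haar wavelet transform, the simplest DWT, shown unnormalized. (JPEG2000 itself uses the 5/3 and 9/7 wavelets, not Haar; this is only to illustrate the averages-plus-details idea behind wavelet coding.)

```python
# Toy one-level Haar DWT (unnormalized): split a signal into pairwise
# averages (a coarse, half-resolution approximation) and differences
# (the detail needed to reconstruct the original exactly).
def haar_1d(signal):
    averages = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    details  = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return averages, details

def haar_1d_inverse(averages, details):
    out = []
    for avg, d in zip(averages, details):
        out += [avg + d, avg - d]   # perfect reconstruction
    return out

avgs, dets = haar_1d([4, 6, 10, 12])
print(avgs, dets)                   # [5.0, 11.0] [-1.0, -1.0]
print(haar_1d_inverse(avgs, dets))  # [4.0, 6.0, 10.0, 12.0]
```

The averages are themselves a lower-resolution version of the signal, which is exactly why a wavelet codestream can be decoded at multiple resolutions; the DCT's frequency blocks offer no such natural hierarchy.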

JPEG2000 is to JPEG as the internal combustion engine was to the steam engine. The steam engine got things started but the world really changed after the internal combustion engine gained widespread use.

JPEG2000 solves many of the limitations found in standard JPEG. It has interactive client/server features, functions extremely well in noisy environments such as wireless channels, offers both lossless and lossy compression in the same codec and on average yields 40% smaller file sizes when compared to JPEG images compressed to the same quality level. So all the shortcomings of standard JPEG are solved with JPEG2000 and then some.

So the ISO committee did a great job on the technology but missed horribly on the name. In the long run, I do not think the name will be a factor but clearly in the short term it sure has caused a lot of confusion in the marketplace.

Could they have chosen a different name which may have resulted in less confusion and helped drive adoption? I think so, and most people I have spoken with in the JPEG2000 business all agree. The similarity in names has resulted in the need for continual market education and that is always an expensive process!

So the next time someone asks you “Have you heard of JPEG2000?”, you’ll know not to answer “That’s JPEG, right?”

Posted in JPEG2000, Standards | Leave a comment