What is VR/AR/MR?

CEO and Founder of DAQRI, Brian Mullins, helps clear the confusion between augmented reality (AR), virtual reality (VR), and mixed reality (MR). Stay tuned for exciting updates by connecting with us:
http://www.twitter.com/DAQRI
http://www.facebook.com/daqriAR
http://www.linkedin.com/company/daqri
http://www.instagram.com/daqri

Image Matters ORIGAMI Module B20 Video Interview

Image Matters develops innovative, high-performance platforms for extreme imaging applications. The ORIGAMI Module B20 is a future-proof video production solution, designed to provide extreme connectivity and processing power to advanced video applications, with the flexibility to scale to meet future interface requirements.
Video presentation of the ORIGAMI B20 module at NAB 2016.

You Deserve Better than Grainy Giraffes

If you’ve spent any time on social media over the past six weeks, you’ve almost certainly seen posts about April the Giraffe, a resident of Animal Adventure Park in Harpursville, New York.

When the park announced that April was pregnant with what would be the zoo’s first-ever baby giraffe, the internet took notice. Since February, with April’s fifteen-month-long gestation period drawing to a close and delivery imminent, eager fans have been able to monitor the expectant mother’s progress via the GiraffeCam, a 24/7 live video stream from April’s enclosure. Before long, April had gone viral, complete with a Twitter account, a #giraffewatch hashtag, a spoof video, and even an elaborate April Fools’ Day conspiracy theory.

As I was checking in on April today—never let it be said that anyone involved with NGCodec is not up-to-the-minute on important current events—I found myself thinking how much nicer it would be if the video quality of the live stream were better. Exciting though the stream is, it’s undeniably a bit grainy and it tends to freeze. Considering how long we’ve been waiting for this birth, I think I speak for all of us when I say that we’d like to see it clearly.

The NGCodec team has been doing a lot of work lately on improving the quality of live video through the use of FPGA hardware encoding. You may recall our announcement late last year about porting our RealityCodec™ H.265/HEVC video encoder to the Amazon Web Services (AWS) Elastic Compute Cloud (EC2) F1 instances. What this milestone signifies, practically speaking, is that we are making strides toward significantly improving live video encoding and the quality of the resulting video. The following excerpt from our recent white paper, “Live Video Encoding Using New AWS F1 Acceleration: The Benefits of Xilinx FPGA for Live Video Encoding,” gives an overview of the current state of live video encoding in software and addresses the many ways in which hardware encoding offers significant advantages over those methods:

In a live video broadcast over the internet, a single video stream is sent from the source to the cloud. It is then transcoded before being sent on to the end viewer: decoded in the cloud and re-encoded into multiple bit rates for adaptive bit rate (ABR) streaming. Today, this is achieved purely in software, typically with open source encoders such as x264 or x265, using many central processing units (CPUs). The difficulty with this approach for live video is that there is a limit to the amount of parallelism that can be exploited to compress the video; that limit is set by the number of cores within the server in question. Because the frame rate (frames per second, FPS) must be maintained to avoid jerky playback, the encoder’s throughput must never drop below this FPS. As a result, the highest-quality settings in the software encoder cannot be used. For our purposes, we will look at the x265 open source software video encoder as an example.
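Purely as an illustration of that pipeline, here is a minimal sketch of a software ABR transcode. It assumes ffmpeg with libx264 is available; the file names and bit rates are hypothetical, and a production ladder would also handle audio, keyframe alignment, and packaging:

```python
# A minimal sketch of a software ABR transcode: one source stream is
# decoded once and re-encoded into several renditions at different bit
# rates. Assumes ffmpeg with libx264 is installed; names are hypothetical.
import subprocess

# Three rungs of the ABR ladder: name, resolution, target bit rate.
RENDITIONS = [
    ("1080p", "1920x1080", "4500k"),
    ("720p",  "1280x720",  "2500k"),
    ("480p",  "854x480",   "1000k"),
]

def abr_transcode(source: str) -> None:
    """Decode the source once and re-encode it into every rendition."""
    cmd = ["ffmpeg", "-i", source]
    for name, resolution, bitrate in RENDITIONS:
        cmd += ["-map", "0:v:0",       # take the first video stream
                "-s", resolution,      # scale to this rendition's size
                "-c:v", "libx264",     # software encode on the CPU
                "-b:v", bitrate,       # target bit rate for this rung
                f"{name}.mp4"]
    subprocess.run(cmd, check=True)

abr_transcode("live_source.ts")
```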

Encoding software like x265 contains a great many presets, allowing the user to customize settings and trade overall computing requirements against the end size of the video. For file-based videos, this technology can produce very high-quality results with the x265 ‘veryslow’ preset: the encode can take many times longer than real time would allow, yielding the best compression, but at considerable cost in encoding resources.

For live video, by contrast, software encoding is simply unable to achieve the maximum quality offered by the encoder technology. Fig. 2 compares 1080p50 source video encoded with different x265 presets (for video quality) against the frames that can be encoded per second on the AWS c4.8xlarge instance type. The tradeoffs needed to stay within the computing budget mean significant reductions in quality: instead of running the encoder at a slow setting, which would produce the best end quality, sacrifices are necessary to achieve the target frame rate. The fundamental problem with software-based encoding for live video is that the best compression (that is, the highest-quality video at the lowest bit rate) is unattainable with the available compute. By comparison, NGCodec’s encoder can achieve 80 FPS and surpass the quality of even the x265 ‘veryslow’ preset.
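The flavor of that comparison is easy to reproduce. The sketch below assumes the x265 command-line encoder is installed and uses a hypothetical raw test clip; it times a fixed number of frames at several presets and reports the achieved FPS against the 1080p50 real-time target:

```python
# A rough benchmark of the preset-versus-throughput tradeoff. Assumes the
# x265 CLI encoder is installed; the test clip name is hypothetical.
import subprocess
import time

SOURCE = "clip_1080p50.y4m"   # hypothetical raw 1080p50 test clip
FRAMES = 500                  # encode a fixed number of frames per run
TARGET_FPS = 50               # real-time threshold for 1080p50

for preset in ("ultrafast", "medium", "slow", "veryslow"):
    start = time.monotonic()
    subprocess.run(
        ["x265", "--preset", preset, "--frames", str(FRAMES),
         SOURCE, "-o", f"out_{preset}.hevc"],
        check=True, capture_output=True)
    fps = FRAMES / (time.monotonic() - start)
    verdict = "real-time" if fps >= TARGET_FPS else "too slow for live"
    print(f"{preset:>10}: {fps:6.1f} fps ({verdict})")
```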

[...] For live video, the primary benefit of encoding with FPGA F1 instances is that we can achieve higher-quality video at the same bit rate, and do it at a desirable 60 frames per second. A second benefit, relevant in certain cases, is lower latency: less lag between the live stream source and the end viewer. Third, the cost of encoding is significantly reduced. Finally, we can support up to 32 independent encoded video streams on a single F1 instance.
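As a rough illustration of what that last figure means for capacity planning, here is a trivial calculation; the 32-streams-per-instance number comes from the excerpt above, while the channel counts are made up:

```python
# A trivial capacity sketch: how many F1 instances are needed for a given
# channel lineup, given up to 32 encoded streams per instance (per the
# excerpt above). The example workload below is hypothetical.
import math

STREAMS_PER_F1_INSTANCE = 32

def f1_instances_needed(channels: int, renditions_per_channel: int) -> int:
    """Instances needed to encode every ABR rendition of every channel."""
    total_streams = channels * renditions_per_channel
    return math.ceil(total_streams / STREAMS_PER_F1_INSTANCE)

# e.g. 40 live channels, each transcoded into a 4-rendition ABR ladder
print(f1_instances_needed(channels=40, renditions_per_channel=4))  # -> 5
```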

In a practical sense, the gain is ultimately that NGCodec can enable customers to achieve higher-quality video by taking advantage of the greater compute capability of an FPGA. We are able to reduce source video to 0.13 percent of its original size with virtually no perceived loss of quality.
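To put that ratio in perspective, here is a back-of-the-envelope check, assuming a raw 8-bit 4:2:0 1080p50 source (1.5 bytes per pixel); the figures are illustrative, not from the white paper:

```python
# A back-of-the-envelope check of the 0.13 percent figure for a 1080p50
# stream, assuming raw 8-bit 4:2:0 video (1.5 bytes per pixel).
WIDTH, HEIGHT, FPS = 1920, 1080, 50

bytes_per_frame = WIDTH * HEIGHT * 1.5      # 4:2:0 chroma subsampling
raw_bps = bytes_per_frame * 8 * FPS         # raw bit rate in bits/second
encoded_bps = raw_bps * 0.0013              # 0.13 percent of the original

print(f"raw source: {raw_bps / 1e9:.2f} Gbit/s")
print(f"encoded:    {encoded_bps / 1e6:.2f} Mbit/s")
```

At roughly 1.6 Mbit/s from a 1.24 Gbit/s raw source, the ratio is in line with typical HEVC bit rates for a 1080p live stream.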

You can download the full-length white paper by clicking here.

I’m confident that hardware encoding with FPGAs is going to be a game-changer for live video, enabling significant increases in picture quality and improved stream fluidity. My only regret is that we won’t be able to take advantage of it in time to celebrate the arrival of April’s baby in glorious high definition. You deserve better than grainy giraffes. We all do. 

Live Video Encoding Using New AWS F1 Acceleration

In today’s mobile world, where live video is rapidly gaining ubiquity in everyday life, NGCodec is leading the charge to overcome the difficulties and sacrifices in quality associated with video encoding using traditional software methods. This white paper discusses the benefits of hardware encoding using Xilinx® FPGA in the new Amazon Web Services (AWS) F1 instances. We open with a background on video encoding and an overview of the encoding process. This is further contextualized with a discussion of the applications of cloud video transcoding and an exploration of the differences between file-based and live video encoding. Following on from this, we explore the limitations of traditional software encoding methods for live video encoding. Having established the drawbacks of relying on CPUs and GPUs, we discuss the superior results that can be obtained through hardware encoding with AWS FPGA F1 instances. Our paper goes on to delve into the methodology behind NGCodec’s FPGA F1 design using the Xilinx Vivado® HLS tool suite and to summarize how we ported our RealityCodec™ H.265/HEVC video encoder to AWS Elastic Compute Cloud (EC2) F1 instances in only three weeks. Finally, the paper outlines our roadmap for a new, twofold business model to make hardware encoding with FPGA F1 instances available to customers of all sizes and closes with an opportunity for readers to try out NGCodec’s video encoding capabilities for themselves.

Guest Post: Virtual Reality – Virtual Standstill

VR seems to be popular only at trade shows.

  • After a very disappointing 2016, Virtual Reality looks set to have another disappointing year in 2017 while its proponents work out how to fix the issues that keep it from being a success.
  • The latest blow is the removal of 200 of the 500 Oculus Rift demonstration stations as a result of poor performance in stores.
  • The idea has been that to get a user to buy VR, he has to try it, but in some stores entire days have gone by without a single demo being given.
  • Best Buy will continue to stock the Oculus Rift, but the floor space given up will be re-used for products that generate better sales per square foot.
  • It appears that the only place where people queue for a demo is at trade shows, with the regular user not really seeing the point of the technology.
  • This is a further indication that the limitations of VR continue to hamper its appeal.
  • These remain:
    • Price: Many of the devices cost several hundred dollars and also require a PC to run, further increasing the cost.
    • Clunky: VR and AR units are still large, clunky and uncomfortable to wear.
    • In many cases they also make the user feel foolish when worn.
    • Comfort and security: VR cuts the user off from almost all sensory input from his immediate environment, severely limiting the situations in which the user would feel comfortable using one.
    • Many units also cause feelings of nausea due to an imperfect replication of the real world compared to what the brain is expecting.
    • Cable: Many units require an HDMI cable, which tethers the user in place and increases the risk of a fall should the user trip over the cable.
    • Content: Both games and other content remain in short supply, limiting the reasons for users to adopt the platform immediately.
    • The adult entertainment industry is a good yardstick for the adoption of new media types and even this has been slower than expected to jump in.
  • The net result is that I think 2017 will be a disappointing year for VR.
  • The one bright spot remains augmented reality (AR) for enterprise customers.
  • For the enterprise, it is productivity that really matters with the user experience being less important.
  • This is because consumers pay money for an experience, whereas enterprise users are paid to use the technology.
  • Hence, enterprise users’ willingness to put up with a substandard user experience is much greater.
  • The AR user experience is still miles from where it needs to be, but critically it does offer productivity improvements that have led many companies to trial it, particularly for employees in the field.
  • Hence, I think that AR in the enterprise should see both unit shipment growth as well as good growth in revenues from software and services in 2017.
  • Consequently, the companies to watch this year are those in this field like ODG, Microsoft HoloLens, Meta, Atheer Labs and of course Magic Leap.
  • Magic Leap is an exception, as it has made incredibly bold promises around a consumer AR offering, but it is questionable how close it really is to having a working, commercial product (see here).
  • From an investment perspective, AR in the enterprise is the only place I would entertain putting money into this year unless it is something aimed at fixing the limitations I have listed above.

Sign up here to get Richard’s excellent daily newsletter.
