
Panda’s new clothes

The neglected child…

A new user interface for our web application has been long overdue. We focused so hard on improving Panda core features that we neglected the UI. A bit. Today is the day to make it up to you. The new, updated and better Panda UI has arrived.

You’re probably very well aware that the old UI was okay-ish but far from perfect. We decided to take it easy and choose evolution instead of revolution. Changing users’ habits and workflows is always a sensitive and tricky business. That’s why we think it makes more sense to introduce changes gradually.

 

[Screenshot: Video list view]

Our main goal for the first rollout was to deliver a cleaner, simpler UI that makes better use of screen space. Based on your feedback, we added small changes for those of you who transcode large volumes. We’ve added a simplified list view for videos and profiles to make them easier to browse. We also unified the application’s behavior, to make sure configuration of key Panda features is always done the same way.

 

[Screenshot: Profiles list view]

The front-end piece of the app is now based on the well-known and proven AngularJS framework.

What’s next?

Expect more changes in the coming weeks. We’re working on better, more detailed encoding analytics. That’s one of the most requested improvements and we’re happy to oblige.

The current console will get an overhaul as well to make it more useful. A brand new piece of Panda – Live Transcoding – will be getting its own piece of UI (it’s in beta now). And of course, there’ll be a number of small tweaks and improvements that at first may go unnoticed but will make your work with Panda both easier and more fun.

We would love to hear what you think. What could have been done better? Did we miss something?

Stay tuned!

On bears and snakes. Panda’s Python library has been updated.

We usually don’t want to deal with complicated APIs, protocols and requests. A straightforward, clear way of doing things is preferred, and it’s usually best to hide raw communication and all the technical details under a simple interface. The structural organization of most successful systems is based on several layers of abstraction. The higher you are, the less control over things you have – but that level of control is often dispensable in favor of simplicity.

Panda communicates with the rest of the world through several endpoints, related to the particular entities it works with, like clouds, notifications and videos. Each of these endpoints can be reached using HTTP requests. Depending on their type (POST, GET, DELETE or PUT) and arguments, various operations are executed: modifying an existing profile, deleting a video or creating a new cloud. All these requests need proper timestamps and the right signature to pass verification. To save you from managing all that on your own, several client libraries are available.
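For the curious, here’s a minimal sketch of what that signing involves: an HMAC-SHA256 signature computed over the request method, host, path and sorted parameters. Treat the canonicalization details below as illustrative rather than a drop-in copy of Panda’s exact algorithm; hiding them is precisely what the client libraries are for.

import base64
import hashlib
import hmac
import time
from urllib.parse import urlencode

API_HOST = "api.pandastream.com"

def sign_request(verb, path, access_key, secret_key, cloud_id, extra=None):
    # Every request carries the credentials plus a timestamp, so an
    # intercepted request can't simply be replayed later.
    params = dict(extra or {})
    params.update({
        "access_key": access_key,
        "cloud_id": cloud_id,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S+00:00", time.gmtime()),
    })
    # Canonical form: parameters sorted by name, joined into a query string.
    to_sign = "\n".join([verb.upper(), API_HOST, path, urlencode(sorted(params.items()))])
    digest = hmac.new(secret_key.encode(), to_sign.encode(), hashlib.sha256).digest()
    params["signature"] = base64.b64encode(digest).decode()
    return params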

New Python library

We just wanted to let you know that the Python library for Panda has been updated to make integrations much easier. So far it only offered basic functionality like signature generation. You still had to provide both an endpoint location and an HTTP method to send a request, and then parse the returned JSON data on your own. That’s no longer needed in the new version of the package, which introduces a new, simpler interface based on the one provided by the Ruby gem. Returned information is now stored in dictionary-like objects, which makes it easier to inspect. Also, you no longer have to input API endpoint locations and proper HTTP method types to interact with your data.
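To give you a taste of the difference, here’s a quick before-and-after sketch. The names on the “new” side follow the Ruby gem’s style and are illustrative, so check the repo linked below for the exact interface.

import json
import panda

client = panda.Panda(
    api_host="api.pandastream.com",
    cloud_id="your-cloud-id",
    access_key="your-access-key",
    secret_key="your-secret-key",
)

# Old way: pick the verb and endpoint yourself, then parse the JSON yourself.
videos = json.loads(client.get("/videos.json"))

# New way (illustrative names): dictionary-like objects, no endpoints, no verbs.
for video in client.videos.all():
    print(video["status"])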

Resumable upload is here

Finally, support for resumable uploads has been added. If you send a file using a basic POST request, you have no chance of resuming the upload in case of a connection failure. That is especially annoying if it happens at the end of uploading a large multimedia file. In such a case, even though several gigabytes have already been sent, you have to start all over again.

Panda offers another, much better approach and allows you to create an upload session. The old version of the library only returned the endpoint address and left all the work up to you. The new one is capable of managing the session using a simple, easy-to-remember set of methods. You no longer have to calculate offsets and positions in a multimedia file to ensure that it is sent in one piece.
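Following on from the sketch above, the flow could look roughly like this (the method names are our shorthand for illustration, not necessarily the final interface):

# `client` as constructed in the previous example.
session = client.videos.new_upload_session("big_movie.mov")  # hypothetical helper
try:
    video = session.start()   # sends the file in chunks
except IOError:
    video = session.resume()  # continues from the last confirmed byte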

Backward compatibility with the previous version is also preserved. If you prefer, you can still use the old way and call specific HTTP methods manually.

With the new library, and thanks to the power of Python, you can easily write clear, robust, elegant and maintainable code. And that’s the fun part, isn’t it?

GitHub repo and examples

XDCAM preset streamlined in Panda

XDCAM is a series of video formats widely used in the broadcasting industry; you might also know them as MXF. Sony introduced them back in 2003, and since then they’ve become quite popular among video professionals. It has always been possible to encode to XDCAM in Panda through our raw encoding profiles, but we’ve decided to make it more streamlined. Oh, and, by the way, to make its quality possibly the best in the industry.

And here it is: the new preset for creating XDCAM profiles. Everything can be set up using Panda’s UI. Because XDCAM only allows a predefined set of possible FPS values, we decided it would be a good idea to always use our motion-compensated FPS conversion for XDCAM profiles (more on Google’s blog). If your input video’s frame rate doesn’t match the one used by the XDCAM preset, or if it is progressive and you need interlaced output, the quality won’t degrade as much as it would without motion compensation. And that’s what gives our preset the best quality in the cloud video encoding industry.

 

[Screenshot: Adding XDCAMs to your profiles is super easy now.]

Should you have any questions or suggestions regarding the new presets – just shoot us an email at team@pandastream.com.

ISO base format: “ftyp” box

Curious clients like the ones we have in Panda are such a joy to work with. One of them sent us a great question about ftyp headers in MPEG-4.

The ISO base media file format (aka MPEG-4 Part 12) is the foundation of a set of formats, MP4 being the most popular. It was inspired by QuickTime’s format, and then generalized into an official ISO standard. Other formats then used this ISO spec as a base and added their own functionality (sometimes in an incompatible way). For example, when Adobe created F4V – the successor of FLV – it used MPEG-4 Part 12 too, but needed a way of packing ActionScript objects into the new format. Long story short, F4V turned out to be a weird combination of MP4 and Flash.

Anyway, all MPEG-4 Part 12 files consist of information units known as ‘boxes’. One of these boxes is ftyp, which contains information about the variant of the ISO base format the file uses. In general, every new variant should be registered at mp4ra.org, but that’s not always the case. A full list of possible ftyp values (registered and unregistered) is maintained on the ftyps.com website.

The majority of MP4 files produced by Panda will be labelled as ISOM (the most general label), but you might want to use a different one. Instead of ISOM you might, for example, use MP42, which stands for MP4 version 2 and does add a few things to the ISO base media file format – so different labels actually make sense.
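If you want to check what a given file says about itself, the box is simple enough to read by hand. Here’s a small Python sketch; it assumes ftyp is the first box in the file, which holds for typical MP4s:

import struct

def read_ftyp(path):
    # An ISO base media file starts with a box header: a 32-bit big-endian
    # size followed by a 4-character type, which should be 'ftyp'.
    with open(path, "rb") as f:
        size, box_type = struct.unpack(">I4s", f.read(8))
        if box_type != b"ftyp":
            raise ValueError("first box is not ftyp")
        major_brand = f.read(4).decode("ascii")
        minor_version = struct.unpack(">I", f.read(4))[0]
        # The rest of the box lists 4-character compatible brands.
        compatible = [f.read(4).decode("ascii") for _ in range((size - 16) // 4)]
    return major_brand, minor_version, compatible

# e.g. ('isom', 512, ['isom', 'iso2', 'avc1', 'mp41']) for typical FFmpeg output
print(read_ftyp("video.mp4"))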

These low-level MPEG-4 Part 12 details can be easily manipulated using GPAC, which fortunately is available in Panda. Assuming that you’re already using raw neckbeard profiles, to change the ftyp of a file from ISOM to MP42 after it’s processed by FFmpeg, you could use the following commands:

ffmpeg -i $input_file$ ... (your other FFmpeg arguments here) ... -y tmp.mp4
MP4Box -add tmp.mp4 -brand mp42 $output_file$

PS. Any time you’d like to use more than one command in a single Panda profile, join them with either ‘;’ or a newline.

Panda Corepack-3 grows bigger and better

As you probably know, you can create advanced profiles in Panda with custom commands to optimize your workflow. Since we are constantly improving our encoding tools, an update could sometimes result in custom commands not working properly. Backwards compatibility can be tough to manage, but we want to make sure we give you a way to handle this.

 

That’s why we made it possible to specify which stack to use when creating a new profile. Unfortunately, the newest one – corepack-3 – used to contain only one tool, FFmpeg. That was obviously not enough and had to be fixed, so we extended the list.
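If you script your setup through the API rather than the UI, pinning a stack could look something like this. Note that the “stack” attribute name is our assumption for illustration, so double-check the profile documentation:

import panda

client = panda.Panda(api_host="api.pandastream.com", cloud_id="your-cloud-id",
                     access_key="your-access-key", secret_key="your-secret-key")

# Create a profile pinned to corepack-3 ("stack" is an assumed attribute name).
client.post("/profiles.json", {
    "name": "h264_on_corepack3",
    "preset_name": "h264",
    "stack": "corepack-3",
})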

 

What’s in there, you ask? Here’s a short summary:

  • FFmpeg – a complete, cross-platform solution to record, convert and stream audio and video.
    http://ffmpeg.org
  • Segmenter – Panda’s own segmenter that divides an input file into smaller parts for HLS playlists.
  • Manifester – used to create m3u8 manifest files.

 

Of course, this list is not final and we’ll be adding more tools as we go along. So, what would you like to see here?

 

A case for MPEG DASH

In an ever-competing IT world there are many rival groups of skilled developers who independently try to solve the same problems and implement the same concepts. This usually results in a vast choice of possible solutions that share a lot of common traits. This abundance of techniques, methods and protocols is one of the things that allowed the rise of modern software.

However, it can also be a burden, because one needs to support multiple technologies instead of being able to focus on one. The worst thing that can happen is a set of incompatible mechanisms that need to be served separately within the application, language or library. A good example: the legendary browser wars of the 90s. Both Microsoft and Netscape developed their own unique features that weren’t supported in the competitor’s product, which brought a ton of problems for web developers who wanted their web pages rendered the same way. Even now it’s common to use various JavaScript libraries like YUI and jQuery to fix issues related to legacy browsers.

That’s why standards are important

They provide a well-defined core that needs to be implemented by all vendors, which makes the constant struggle for portability a bit easier. A standard shifts the responsibility: developers no longer have to worry about every possible type of user software and include tests for all the special cases. They don’t need to write extra code just to handle a single task that is done differently in different environments. They can improve the support of a single protocol instead of working with five. It’s now a vendor’s job to provide a product that works with code compliant to the specification.

Unfortunately, creating a standard is not a simple task, and there are a lot of problems to solve in order to satisfy all the needs and cases. A clash between proposals is hard to avoid. It takes time for one victorious solution to emerge and dominate the market.

Divided world of adaptive bitrate streaming

Such strife can be observed right now in the world of multimedia streaming techniques. There are three competing HTTP-based methods, referred to as adaptive bitrate streaming: Apple’s HLS, Microsoft’s HSS and Adobe’s HDS. These three provide a way to transmit multimedia with a bitrate that can be changed dynamically, depending on network bandwidth and hardware capabilities.

They are similar but occupy different parts of the market. HSS is present in Silverlight-based applications, HLS is in common use among mobile devices, and HDS is a popular companion of Flash on the desktop. It would be a lot easier for developers to have one common technology to support instead of three separate ones. That’s why there have been attempts to standardize adaptive bitrate streaming.

Enter MPEG DASH

The MPEG group, a major organization behind commonly used multimedia standards, introduced its own version of HTTP-based streaming, called MPEG DASH, which now strives to become the dominant method for delivering rich video content. Right now MPEG DASH is far from being a champion and the only preferred choice. HDS, HLS and HSS are still commonly used across the Internet. It’s hard to predict if it’ll prevail. All that’s certain is that it should not be ignored. There’s little merit in waiting for a winner to rise victorious from this clash of technologies. That’s why we decided to enhance Panda with DASH support.

We provide this feature through a new preset, available from the profiles list. Similar to the HSS support we introduced recently, there are two ways the encoded set of output files can be stored in the cloud. If the default .tar extension is preserved, both the multimedia files and the XML manifest with all the necessary metadata are archived into a single file, which can later be downloaded and unpacked. Alternatively, you can choose .mpd, which makes Panda upload all output files to the cloud separately.

Another important decision to make is the set of output bitrates. The default setting consists of bitrates of 2400k, 600k, 300k and 120k. Changing the video bitrate value through the preset settings panel results in values equal to the one you set, plus 1/2, 1/4 and 1/8 of it (just like with HSS, which we introduced before).
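As a quick sketch of that rule (note that the built-in defaults above follow their own spread rather than exact halvings):

def bitrate_ladder(video_bitrate_kbps):
    # The custom value you set, followed by 1/2, 1/4 and 1/8 of it.
    return [video_bitrate_kbps // d for d in (1, 2, 4, 8)]

print(bitrate_ladder(2000))  # [2000, 1000, 500, 250]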

To test your output you can use one of these media players:

This new preset allows you to adapt our product to your needs more flexibly. There is a saying that “nobody ever got fired for buying IBM equipment”, because when in doubt one should choose what is standard for the industry. If you want to provide your application with the features that modern streaming techniques offer, then choosing MPEG DASH might be a good option. Panda is there to help get your videos encoded the right way.

All profiles are equal…

…but some profiles are more equal than others. At least they might be to you. We do realize that Panda is basically all about video transcoding, and its core tasks are pretty much the same for most of you. However, a good service should be customizable so it can fit specific user requirements.

If your business relies on specific output files more heavily than others, you may want them to be encoded first. Until now, you could do this by setting a priority on your encoding clouds and sending the more important files to the one with high priority.

[Screenshot: Setting priority on encoding cloud]

This solution, while generally good, had a limitation if you wanted a specific profile encoded before others. Let’s say you need a high-quality MP4 (H.264) encoded as your primary profile, while the smaller, lower-quality MP4s could follow later on. That required you to upload files twice, to two different encoding clouds. Definitely not fun, as one of our clients pointed out.

We had to do something about it. And we did. A couple of days later, profiles can now also have a priority setting of High, Normal or Low, so you can control the order in which the jobs for specific profiles are processed.

 

[Screenshot: Setting priority on encoding profile]

Your Panda workflow just got even more flexible.

Have a nice weekend!

 

Panda gets more secure with AES-128 encryption for HLS

Yay, it’s Friday afternoon! Here’s something short before you go and enjoy your weekend.

We have just added to Panda the ability to encrypt HLS streams. This means improved transmission security, and you get to control who can view your videos. We help you encrypt your videos either by generating a 128-bit key along with an initialization vector, or by using a key provided by you.

In short: we encrypt all the segments of your video found in the associated HLS variant playlist. You choose a URL which serves your secret key, so that your media player can decrypt them. HLS Variant and HLS Variant audio profiles now contain an Encryption section to help you set everything up in seconds.
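If you’d like to bring your own key, here’s a minimal sketch of generating the raw material and the playlist tag that ties it together. The key URI is a placeholder for your own endpoint; the EXT-X-KEY syntax comes from the HLS spec:

import os

key = os.urandom(16)  # 128-bit AES key; serve this from your key URL
iv = os.urandom(16)   # initialization vector

with open("my-video.key", "wb") as f:
    f.write(key)

# The variant playlist references the key like this:
print('#EXT-X-KEY:METHOD=AES-128,URI="https://example.com/keys/my-video.key",IV=0x%s' % iv.hex())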

To make sure the secret key behind that URL is properly protected, you will need to authorize the media player to obtain the key using your web authentication service.

You can find out more on our documentation page.

Panda adds streaming with Microsoft HSS

You could have the most astounding video content, in high resolution and with amazing quality, enhanced with all sorts of special effects and advanced graphical filters – it doesn’t matter if you aren’t capable of delivering it to your consumers. Their connection speed is often limited and they might not have enough bandwidth to receive all these megabytes filled with rich multimedia data. While our networks are improving at an astonishing rate, they’re still the main bottleneck of many systems, as file sizes rise rapidly with better resolutions and bitrates. You can add several more cores to your servers to increase their computing power, but you’re not able to alter the Internet infrastructure of your users. You have to choose: send them high definition data, or sacrifice quality to make sure the experience is smooth.

Continuous streaming vs Adaptive bitrate

The most obvious solution is to prepare several versions of the same video and deliver one of them depending on the user’s bandwidth. In the past this was the standard approach: once the choice was made, files were streamed progressively, from beginning to end, just like images in modern web browsers.

This method has several disadvantages. The biggest one: you cannot dynamically switch between versions in the middle of the sending process to react to changes in network load. If the connection improves, you can’t take advantage of it, and you have to keep sending worse quality despite having resources available. Even worse, you can’t prevent congestion if the transmission speed decreases – you either cancel the entire process or end up with laggy video. With continuous streaming you also can’t just skip part of the multimedia and jump ahead until the downloading process reaches the desired moment, nor can you rewind quickly.

To fix these issues, a better, more flexible solution is needed. For a long time the preferred choice was Adobe RTMP (Real-Time Messaging Protocol) used together with Adobe FMS (Flash Media Server). It was complex, and it became problematic in the era of mobile devices, since their support for Flash-based technologies is patchy at best. This allowed HTTP-based protocols to emerge and dominate the market.

These technologies split video and audio into smaller segments which are encoded at different bitrates. This allows the client to dynamically choose the optimal data based on the current connection speed and CPU. It’s called adaptive bitrate streaming.
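The client-side logic is simple at its core: measure how fast recent segments arrived, then request the best rendition that fits. A toy sketch:

RENDITIONS_KBPS = [2400, 600, 300, 120]  # available renditions, best first

def pick_rendition(measured_bandwidth_kbps, headroom=0.8):
    # Leave some headroom so a small dip doesn't immediately stall playback.
    usable = measured_bandwidth_kbps * headroom
    for bitrate in RENDITIONS_KBPS:
        if bitrate <= usable:
            return bitrate
    return RENDITIONS_KBPS[-1]  # the connection is bad: take the lowest and hope

print(pick_rendition(1500))  # -> 600
print(pick_rendition(5000))  # -> 2400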

HTTP has a number of advantages compared to RTMP:

  • it’s a well known, simple, popular and universally applied protocol
  • it can use caching features of content delivery networks
  • it traverses firewalls much more easily.

Microsoft HSS, underrated protocol

As of now there is no single, standard HTTP-based protocol; instead, there are several implementations by different vendors. One of them, Apple’s HLS, has been available in Panda for a long time. Now we’re adding another one – HSS (HTTP Smooth Streaming), a Microsoft technology which brings adaptive bitrate streaming to Silverlight applications. Even though Silverlight is not as popular as it used to be (over 50% market penetration in 2011) now that HTML5 has arrived, it’s still a widespread, common technology and a noteworthy rival of Flash.

To use HSS, a specialized server is needed. The most obvious choice would be Microsoft’s IIS, but there are modules for Nginx, Apache Httpd and Lighttpd as well. After setting it up, together with a Silverlight player, you need to split your video files into data segments (files with the .ismv extension) and generate manifest files (.ism and .ismc extensions), which inform receivers what kind of content the server can deliver.

HSS preset in Panda

This is where Panda comes in handy as a convenient encoding tool. All you have to do is add the HSS preset to your set of profiles and configure it as needed to get a pack of converted files ready to deploy. The most important setting is the output file format. With the default ‘.tar’ extension, at the end of the encoding process you will receive a single, uncompressed archive which contains all the necessary data. All that’s left is to unpack this archive into the selected folder of your video server and then point your Silverlight player at the manifest file. You can alternatively choose the ‘.ism’ format, which won’t archive the output. Instead, the files will simply be sent to your cloud, from where you can use them any way you need.
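The unpacking step is a one-liner in most languages; in Python, for example (the paths here are just examples):

import tarfile

with tarfile.open("encoded_output.tar") as archive:
    archive.extractall("/var/www/smooth-streaming/my-video/")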

 


 

Another important thing to consider is the video bitrate value for your segments. The default settings produce segments with bitrates of 2400k, 600k, 300k and 120k. If you insert a custom value, your output will consist of segments with a bitrate equal to the provided value, plus 1/2, 1/4 and 1/8 of it. Finally, you might alter the standard set of options such as resolution and aspect ratio.

Now you can have the benefits of adaptive bitrate streaming even if your business uses Microsoft technologies. All you have to worry about is content quality, since the problems with delivery are becoming less of a burden, all thanks to the abilities of the HTTP protocol.

 

Frame rate conversion with motion compensation

Here at Panda, we are constantly impressed with the requests that our customers have for us, and how they want to push our technology to new areas. We’ve been experimenting with more techniques over the past year, and we’ve officially pushed one of our most exciting ones to production.

Introducing frame rate conversion by motion compensation. This has been live in production for some time now, and is being used by select customers. We wanted to hold off until we saw consistent success before we officially announced it 🙂 We’ll try to explain the very basics to let you build an intuition of how it works – however, if you have any questions regarding this, and how to leverage it for your business needs, give us a shout at support@pandastream.com.

Motion compensation is a technique that was originally used for video compression, and now it’s used in virtually every video codec. Its inventors noticed that adjacent frames usually don’t differ too much (except for scene changes), and then used that fact to develop a better encoding scheme than compressing each frame separately. In short, motion-compensation-powered compression tries to detect movement that happens between frames and then use that information for more efficient encoding. Imagine two frames:

[Image: Panda on the left…]
[Image: …aaand on the right.]

Now, a motion compensating algorithm would detect the fact that it’s the same panda in both frames, just in different locations:

[Image: First stage of motion compensation: motion detection.]

We’re still thinking about compression, so why would we want to store the same panda twice? Yep, that’s what motion-compensation-powered compression does – it stores the moving panda just once (usually, it would store the whole frame #1), but adds information about the movement. The decompressor then uses this information to reconstruct the rest (frame #2 based on frame #1).

That’s the general idea, but in practice it’s not as smooth and easy as in the example. The objects are rarely exactly the same, and usually some distortions and non-linear transformations creep in. Scanning for movements is computationally very expensive, so we have to limit the search space (and optimize the hell out of the code, even resorting to hand-written assembly).
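To make that cost concrete, here’s a toy NumPy sketch of the core operation, block matching: for a block of the previous frame, exhaustively scan a window of the current frame for the best match. Real encoders use much smarter search strategies, but the brute-force version shows why this gets expensive fast:

import numpy as np

def match_block(prev, curr, y, x, block=16, search=8):
    # Find where the block at (y, x) in `prev` moved to in `curr` by
    # minimizing the sum of absolute differences (SAD) over a search window.
    ref = prev[y:y + block, x:x + block].astype(np.int32)
    best_sad, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > curr.shape[0] or xx + block > curr.shape[1]:
                continue
            cand = curr[yy:yy + block, xx:xx + block].astype(np.int32)
            sad = int(np.abs(ref - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_vec = sad, (dy, dx)
    return best_vec  # the motion vector for this block

For frame rate conversion, the same vectors can then be used to place each block halfway along its path, which is exactly the “panda in the middle” trick described below.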

Okay, but compression is not the topic of this post. Frame rate conversion is, and motion compensation can be used for this task too, often with really impressive results.

For illustration, let’s go back to the moving panda example. Let’s assume we display 2 frames per second (not impressive), but we would like to display 3 frames per second (so impressive!), and the video shouldn’t play any faster when we’re done converting.

One option is to cheat a little bit and just duplicate a frame here and there, getting 3 FPS as a result. In theory we could accomplish our goal that way, but the quality would suck. Here’s how it would work:

[Image: Converting from 2 FPS to 3 FPS by duplicating frames.]

Yes, the output has 3 frames where the input had 2, but the effect isn’t visually appealing. We need a bit of magic to create a frame that humans would see as fitting naturally between the two initial frames – the panda has to be in the middle. That is a task motion compensation can deal with: detect the motion, but instead of using it for compression, create a new frame based on the gathered information. Here’s how it should work:

[Image: Converting from 2 FPS to 3 FPS by motion compensation: panda is in the middle!]

 

These are the basics of the basics of the theory. Now an example, taken straight from a Panda encoder. Let’s begin with how frame duplication (the bad guy) looks (for better illustration, after converting the FPS we slowed down the video, and got slow motion as a result):

[Video: slow motion via frame duplication]

See that jitter on the right? Yuck. Now, here’s what happens if we use motion compensation (the good guy) instead:

[Video: slow motion via motion compensation]

It looks a lot better to me: the movement is smooth and there are almost no visible video artifacts (maybe just a slight noise). But, of course, other types of footage can fool the algorithm more easily. Motion compensation assumes simple, linear movement, so other kinds of image transformations often produce heavier artifacts (they might still be acceptable, though – it all depends on the use case). Occlusions, refractions (water bubbles!) and very quick movement (which means that too much happens between frames) are the most common examples. Anyway, it’s not as terrible as it sounds, and still better than frame duplication. For illustration, let’s use a video full of occlusions and water:

[Video: original footage with occlusions and water]

Okay, now let’s slow it down four times with both frame duplication and motion compensation, displayed side by side. Motion compensation now produces clear artifacts (see those fake electric discharges?), but it still looks better than frame duplication:

[Video: side-by-side comparison of frame duplication and motion compensation]

And that’s it. The artifacts are visible, but the unanimous verdict of a short survey in our office is: the effect is a lot more pleasant with motion compensation than with frame duplication. The feature is not publicly available yet, but we’re enabling it for our customers on demand. Please remember that it’s hard to guess how your videos will look when treated with our FPS converter – but if you’d like to give it a chance and experiment a bit, just drop us an email at support@pandastream.com.