
Easy Telestream Cloud integration with LiveSync

What is LiveSync?

We have just added a new, easier way to integrate with Telestream Cloud. With LiveSync, all you need to do is specify an additional AWS S3 source bucket for a new or existing factory and we take care of the rest. Any time you add a new file to it, we will automatically encode it to all output profiles assigned to the factory.

LiveSync allows you to batch process videos without having to write any code or make API requests. It also comes with an option to back-synchronize files you might already have in your source bucket. Once you set up a factory and define output parameters exactly as you need them, you can encode all previously existing files in the source bucket with one click. Once the files are encoded, they will be delivered to your selected storage. We strongly recommend making sure your output is exactly as you need it before enabling back-synchronization.

It’s up to you to decide when to use LiveSync – either when creating a new factory or by changing factory settings at any time.

It’s currently available on AWS only, but we’re working on a Google Cloud Storage implementation as well.

How to get started?

When creating a factory, select S3 as your storage option and identify your S3 source bucket. It can’t be the same bucket as the output bucket, but it has to be accessible with the same credentials. Alternatively, you can turn on LiveSync in Factory Settings for any factory you’ve already created.

Enabling LiveSync while adding a new factory

We will need your Access Key and Secret Key only once, to let the Telestream Cloud user access the S3 bucket. We don’t store the keys. Once configured, we will use AWS events to monitor the bucket, and ingest and encode any new media files placed in it. The easiest integration ever.

You don’t need transcoding

Well, not always. Sometimes muxing might be a better option.

Muxing is the process of packing encoded streams into another container format while preserving your video and audio codecs. There is no actual transcoding or modification of your video streams. It just changes the outermost container.

 

Muxing at its finest

A few days ago we added a new preset in Panda called “HLS Muxing Variant”. You can easily guess what it does with the input video. The most important thing about transmuxing is that it takes less time compared to traditional encoding with “HLS Variant”, as it does not change resolution, bitrate, etc. That’s why we priced it as low as ¼ of a standard video minute, no matter the size or resolution of the source video.

It may sound complicated, so here’s a real-life example. Let’s assume you have an HQ source video encoded as H264/AAC with a 2000k bitrate. Re-encoding is always time-consuming and impacts quality, so you can use transmuxing to change only the format. You may say that HLS is an adaptive streaming technology, so you need more than one bitrate. You’re right! You can create two other profiles for 1000k and 500k, and a variant playlist as well.

Panda::Profile.create!({
  :preset_name => "hls.muxer",
  :bitrate => 2000, # these three values are used for the variant playlist
  :width => 1280,
  :height => 720
})

Panda::Profile.create!({
  :preset_name => "hls.variant",
  :video_bitrate => 1000
})

Panda::Profile.create!({
  :preset_name => "hls.variant",
  :video_bitrate => 500
})

Panda::Profile.create!({
  :preset_name => "hls.variant.playlist",
  :variants => "hls.*"
})

Now you can send your HQ source video to Panda. The output will be 1 master playlist, 3 variant playlists and 3 groups of segments (and some screenshots). With these in place you are ready to serve your adaptive streaming content.
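If you prefer to kick off the encoding from the Ruby gem, the sketch below shows one way to do it. It’s a minimal example with a placeholder source URL, and it assumes that with no :profiles argument Panda encodes the new video with every profile defined in the cloud – here, the four HLS profiles created above.

# Minimal sketch: send the HQ source to Panda and let it be encoded with
# all profiles defined in the cloud (the four HLS profiles created above).
# The source URL is a placeholder.
Panda::Video.create!(
  :source_url => "HQ_SOURCE_VIDEO_URL"
)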

Give it a try. If you have any problems, remember that we are here for you and always happy to help.

Easier, faster, better looking & still secure – API Tokens

If you’ve ever had to access the Panda API by crafting raw HTTP requests, or had to write your own Panda client library, you know how annoying request signatures can be. They make communication very secure, but they can be very inconvenient.

Building a signature was quite a complex, error-prone task. And debugging wasn’t the most pleasant thing on earth either, as the number of possible mistakes was huge. Each of them manifested in the same way – an error message saying that a signature mismatch had occurred.
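To give you an idea of what was involved, the signing scheme looks roughly like the sketch below. It is a simplified illustration rather than the exact client code – the real escaping rules for query parameters are stricter – but it shows how many details (verb, host, path, parameter order, encoding) all have to be exactly right.

require "openssl"
require "base64"
require "cgi"

# Rough sketch of request signing: build a canonical string from the HTTP verb,
# the API host, the request path and the alphabetically sorted, URL-escaped
# query parameters, then HMAC-SHA256 it with your secret key. Get any detail
# slightly wrong and the API answers with a signature mismatch error.
def generate_signature(verb, path, host, secret_key, params)
  query = params.sort.map { |k, v| "#{k}=#{CGI.escape(v.to_s)}" }.join("&")
  string_to_sign = [verb.to_s.upcase, host.downcase, path, query].join("\n")
  Base64.encode64(
    OpenSSL::HMAC.digest(OpenSSL::Digest.new("sha256"), secret_key, string_to_sign)
  ).chomp
end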

Wouldn’t it be great to have another authorization method, one as simple to use as copying and pasting a string? Without compromising security. One simple enough to make querying Panda from command-line tools actually viable?

It bothered us as well, so we decided to put some time into making everyone’s life a bit easier. We came up with a solution that’s used by a number of payment platforms – and those guys usually do care about security. If you’re using the Panda API, you can now authorize yourself with an API Token instead of a signature.

There is one unique auth token per encoding cloud in Panda. You can check the API Token for each cloud in our web application and generate a new one if needed.

API Token view

And now we can finally do what other services have been bragging about for a long time. We can have curl examples. YAY!

Here’s how you send a file to Panda now (more examples in our docs):

curl -X POST -H "Content-Type: multipart/form-data" -F "file=@/path/to/file/panda.mp4" "http://api.pandastream.com/v2/videos.json?token=clou_lCTyUrw5eapr3rVE5vTOwlgxW&file=panda"

Response:

{
   "id":"524fb96a85e8cf0edbe5865d070539cc",
   "status":"processing",
   "created_at":"2015/07/17 15:33:47 +0000",
   "updated_at":"2015/07/17 15:33:48 +0000",
   "mime_type":null,
   "original_filename":"panda.mp4",
   "source_url":null,
   "duration":14014,
   "height":240,
   "width":300,
   "extname":".mp4",
   "file_size":805301,
   "video_bitrate":344,
   "audio_bitrate":112,
   "audio_codec":"aac",
   "video_codec":"h264",
   "fps":29.97,
   "audio_channels":2,
   "audio_sample_rate":44100
}

 

That’s all, folks. Have a great weekend!

Closed Captions… what, why and how?

Closed captions have become an inseparable part of any video. They make it possible to watch Scandinavian independent cinema. They help the hearing impaired experience Game of Thrones as fully as everyone else. We all benefit from them.

Most video players have an option to load subtitles from a file. However, that means that if you want to deliver video with subtitles to your client, you’d have to send not only the media file, but the subtitle files too. What if they get mixed up? And how can you be sure you have sent the client all available subtitle files? Fortunately, there are other ways.

The first option is to burn subtitles into every frame of the video. Sometimes this is needed for devices which can’t overlay subtitles on frames by themselves – old TVs are a good example. But does that mean we should be limited by old technology? Of course not. The second option is to use closed captioning. It allows you to put multiple subtitles into one video file, each added as a separate subtitle track. Anyone who downloads a video with closed captions embedded can then select which track to use, or disable them entirely if they’re not needed.

Closed captions are a must-have these days and we didn’t want to be left behind. So, there’s a new parameter in the H.264 preset which enables closed captioning. At the moment it is accessible only through our API, but we are working on adding it to our web application. The parameter name is ‘closed_captions’ and the value can be set to:

  • ‘burn’ – with this setting Panda will take the first subtitle file from the list and burn its subtitles into every frame
  • ‘add’ – with this setting Panda will put each subtitle file from the list into a separate track

Here’s a snippet of Ruby code with an example of how to use it:

Panda::Profile.create(
    :preset_name => "h264",
    :name => "h264.closed_captions",
    :closed_captions => "add"
)

Panda::Video.create!(
    :source_url => "VIDEO_SOURCE_URL",
    :subtitle_files => ["SUBTITLE_1_SOURCE_URL", "SUBTITLE_2_SOURCE_URL",  "SUBTITLE_3_SOURCE_URL"],
    :profiles => "h264.closed_captions"
)

Panda supports all major subtitle formats like SRT, DVD, MicroDVD, DVB, WebVTT and many more.

Thank you!

How profiles pipelining makes your life easier

Have you ever wondered if there is an option to encode your video and then use the encoded version as input for a new encoding? So far it hasn’t been available off the shelf, but it has been possible to achieve using our notification system. But why should our customers have to take care of it by themselves?

So, what is profiles pipelining?

Let’s say you want to send a video to Panda and encode it using 3 profiles: “h264.1”, “h264.2”, “h264.3”, and then you want the video created with profile “h264.2” to be encoded using profiles “h264.4” and “h264.5”. You also want the output created with profile “h264.3” to be encoded using profile “h264.6”. But that’s not the end. To make it harder, you also want to encode the video created with profile “h264.5” using “h264.7”. It can be hard to imagine what is going on, so for simplicity the image below shows what I mean.

 

Example pipeline

First we need to describe it using JSON:

{
  "h264.1":{},
  "h264.2":{
    "h264.4":{},
    "h264.5":{
      "h264.7":{}
    }
  },
  "h264.3":{
    "h264.6":{}
  }
}

 

And now we can send our request to Panda. Below is an example in Ruby:


pipeline = {
  "h264.1" => {},
  "h264.2" => {
    "h264.4" => {},
    "h264.5" => {
      "h264.7" => {}
    }
  },
  "h264.3" => {
    "h264.6" => {}
  }
}

Panda::Video.create!(
  :source_url => "SOURCE_URL_TO_FILE",
  :pipeline => pipeline.to_json
)

Now, when an encoding is done, Panda will check whether there is anything more to do next in the pipeline. If, for example, the encoding for “h264.2” is done, it becomes a new input for the “h264.4” and “h264.5” profiles, and so on. Encodings created using pipelines have an additional field, parent_encoding_id, which can be used to find out which input was used for the encoding, or to reproduce the pipeline with encodings instead of profiles.
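As a quick illustration, here is a minimal sketch of using that field from the Ruby gem. It assumes the gem exposes a video’s encodings and each encoding’s attributes (such as profile_name and the new parent_encoding_id) as methods, and it uses a placeholder video ID.

# Sketch: report which input each encoding of a pipelined video came from.
video = Panda::Video.find("VIDEO_ID")

video.encodings.each do |encoding|
  if encoding.parent_encoding_id
    puts "#{encoding.profile_name} was encoded from encoding #{encoding.parent_encoding_id}"
  else
    puts "#{encoding.profile_name} was encoded from the original upload"
  end
end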

If you have any problems with this new feature don’t forget that we are always here to help you.

Take care!

On bears and snakes. Panda has an updated Python library.

We usually don’t want to deal with complicated APIs, protocols and requests. A straightforward, clear way of doing things is preferred, and it’s usually best to hide raw communication and all the technical details under a simple interface. The structural organization of most successful systems is based on several layers of abstraction. The higher you are, the less control you have, but that level of control is often dispensable in favor of simplicity.

Panda communicates with the rest of the world through several endpoints, related to the particular entities it works with, like clouds, notifications and videos. Each of these endpoints can be reached using HTTP requests. Depending on their type (POST, GET, DELETE or PUT) and arguments, various operations are executed: modifying an existing profile, deleting a video or creating a new cloud. All these requests need proper timestamps and the right signature to pass verification. To save you from managing it all on your own, several client libraries are available.

New Python library

We just wanted to let you know that the Python library for Panda has been updated to make integrations much easier. So far it only offered basic functionality like signature generation. You still had to provide both an endpoint location and an HTTP method to send a request, and then parse the returned JSON data on your own. That’s no longer needed in the new version of the package, which introduces a new, simpler interface based on the one provided by the Ruby gem. Returned information is now stored in dictionary-like objects, which makes it easier to inspect. Also, you don’t have to supply API endpoint locations and the proper HTTP method types to interact with your data.

Resumable upload is here

Finally, support for resumable uploads was added. If you send a file using a basic POST request, you don’t have a chance of resuming the upload in case of a connection failure. It is especially annoying if it happens at the end of uploading a large multimedia file. In such a case, even though several gigabytes have already been sent, you have to start all over again.

Panda offers another, much better approach, and allows you to create an upload session. The old version of the library only returned the endpoint address and left all the work up to you. The new one is capable of managing the session using a simple, easy-to-remember set of methods. You no longer have to calculate offsets and positions in a multimedia file to ensure that it will be sent in one piece.

Backward compatibility with the previous version is also preserved. If you prefer, you can still use the old way and call specific HTTP methods manually.

With the new library, and thanks to the power of Python, you can easily write clear, robust, elegant and maintainable code. And that’s the fun part, isn’t it?

GitHub repo and examples

XDCAM preset streamlined in Panda

XDCAM is a series of video formats that are widely used in the broadcasting industry; you might also know them as MXF. Sony introduced them back in 2003, and since then they’ve become quite popular among video professionals. It has always been possible to encode to XDCAMs in Panda through our raw encoding profiles, but we’ve decided to make it more streamlined. Oh, and, by the way, to make their quality possibly the best in the industry.

And here it is, the new preset to create XDCAM profiles. Everything can be set up using Panda’s UI. Because XDCAMs only allow a predefined set of possible FPS values, we decided that it would be a good idea to always use our motion-compensated FPS conversion for XDCAM profiles (more on Google’s blog). If your input video’s frame rate doesn’t match that used by the XDCAM preset, or if it is progressive and you need interlaced outputs, the quality won’t degrade as much as it would without motion compensation. And that’s what gives our preset the best quality in the cloud video encoding industry.

 

Adding XDCAMs to your profiles is super easy now.

Should you have any questions or suggestions regarding the new presets – just shoot us an email at team@pandastream.com.

ISO base format: “ftyp” box

Curious clients like the ones we have in Panda are such a joy to work with. One of them sent us a great question about ftyp headers in MPEG-4.

The ISO base media file format (aka MPEG-4 Part 12) is the foundation of a whole set of formats, MP4 being the most popular. It was inspired by the QuickTime format, and then generalized into an official ISO standard. Other formats then used this ISO spec as a base and added their own functionality (sometimes in an incompatible way). For example, when Adobe created F4V – the successor of FLV – it used MPEG-4 Part 12 too, but needed a way of packing ActionScript objects into the new format. Long story short, F4V turned out to be a weird combination of MP4 and Flash.

Anyway, all MPEG-4 Part 12 files consist of information units known as ‘boxes’. One kind of box is ftyp, which contains information about the variant of the ISO-based format the file uses. In general, every new variant should be registered on mp4ra.org, but that’s not always the case. A full list of possible ftyp values (registered and unregistered) is maintained on the ftyps.com website.

The majority of MP4 files produced by Panda will be labelled as ISOM (the most general label), but you might want to use a different one. Instead of ISOM you might, for example, use MP42, which is MP4 version 2 and does add a few things to the ISO base media file format, so different labels actually make sense.
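If you are curious which brand a file currently carries, tools like MP4Box or ffprobe will report it, or you can peek at the first few bytes yourself. Below is a tiny Ruby sketch that assumes the ftyp box sits at the very start of the file (the common case) and uses a placeholder filename.

# ftyp box layout: [4-byte size][4-byte type "ftyp"][major brand][minor version][compatible brands...]
File.open("output.mp4", "rb") do |f|
  size  = f.read(4).unpack("N").first   # box size (big-endian 32-bit)
  type  = f.read(4)                     # box type, should be "ftyp"
  major = f.read(4)                     # major brand, e.g. "isom" or "mp42"
  f.read(4)                             # minor version (skipped)
  compatible = f.read(size - 16).scan(/.{4}/m)
  puts "#{type}: major_brand=#{major}, compatible_brands=#{compatible.join(', ')}"
end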

These low-level MPEG-4 Part 12 details can be easily manipulated using GPAC, which fortunately is available in Panda. Assuming that you’re already using raw neckbeard profiles, to change the ftyp of a file from ISOM to MP42 after it’s processed by FFmpeg, you could use the following commands:

ffmpeg -i $input_file$ ... (your other FFmpeg arguments here) ... -y tmp.mp4
MP4Box -add tmp.mp4 -brand mp42 $output_file$

PS. Any time you’d like to use more than one command in a single Panda profile, join them either by ‘;’ or a newline.

Panda Corepack-3 grows bigger and better

As you probably know, you can create advanced profiles in Panda with custom commands to optimize your workflow. Since we are constantly improving our encoding tools, an update could sometimes result in custom commands not working properly. Backwards compatibility can be tough to manage, but we want to make sure we give you a way to handle this.

 

That’s why we made it possible to specify which stack to use when creating a new profile. Unfortunately, the newest one – corepack-3 – used to have only one tool, FFmpeg. That was obviously not enough and had to be fixed, so we extended the list.
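To illustrate the idea, pinning a custom-command profile to a stack might look something like the sketch below. The :command and :stack attribute names are assumptions used for illustration only – check the advanced profiles documentation for the exact fields.

# Hypothetical sketch: an advanced profile with a custom command, pinned to
# corepack-3 so later stack updates don't change its behaviour.
# :command and :stack are assumed attribute names, not confirmed API fields.
Panda::Profile.create!(
  :name    => "custom_mp4",
  :command => "ffmpeg -i $input_file$ -c:v libx264 -c:a aac -y $output_file$.mp4",
  :stack   => "corepack-3"
)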

 

What’s in there, you ask? Here’s a short summary:

  • FFmpeg – a complete, cross-platform solution to record, convert and stream audio and video.
    http://ffmpeg.org
  • Segmenter – Panda’s own segmenter that divides the input file into smaller parts for HLS playlists.
  • Manifester – used to create m3u8 manifest files.

 

Of course, this list is not final and we’ll be adding more tools as we go along. So, what would you like to see here?