Micro & Nano studio MOOC

Today we decided to create the ultimate mini studio for recording a MOOC with pro-quality audio and video, while on a budget and in need of a transportable setup.

The minimum we are looking for is recording the voice of our teacher, a front camera, and a tablet for their writing.

The Micro studio MOOC

The Micro Studio is a setup that allows us to use 2 microphones, so two people can be recorded together.

(Photo: Micro studio MOOC)

Using the Atomos Ninja 2 allows us to record the iPad.

The tools we will use for the micro-studio are as follows:

  • 1x iPhone 5s
  • 1x iPad Pro 12.9”
  • 1x Apple Pencil
  • 1x Apple HDMI adaptor (for our iPad)*
  • 1x Atomos Ninja 2
  • 1x SSD 256GB
  • 1 or 2x lavalier microphones (XLR)
  • 1x iRig Pro Duo (for our iPhone)
  • 1x iRig Power Bridge

*Tip: you can use a power supply with the adaptor to charge the iPad. I would recommend the 29W MacBook Air power supply (with its USB-C port) together with the Lightning to USB-C cable from Apple. This will recharge your iPad Pro 12.9″ much faster than the power supply originally provided by Apple.

Here is the schematic of our micro-studio

(Schematic: Micro Studio MOOC v1.0, functional diagram)

 

The software we will use is as follows:

For the iPad

  • 1x PDF Expert
  • 1x FiLMiC Remote
  • 1x Clapperboard
  • (1x FileBrowser)

For iPhone

  • 1x FiLMiC Pro
  • (1x FileBrowser)

FileBrowser is optional, but it is a great tool to transfer files from your iOS device to a cloud server or a computer with sharing enabled, using only Wi-Fi.

The workflow

  1. We start the Atomos recording first, then launch FiLMiC Pro on the iPhone (with the remote option enabled)
  2. From the iPad, we start (or stop) the recording on the iPhone using FiLMiC Remote
  3. Both recorders are now recording (iPhone + Atomos)
  4. We need to synchronise them, and we will use Clapperboard to do this
  5. On the iPad we launch Clapperboard and clap it
  6. The image of the clap is recorded by the Atomos (which captures the iPad's HDMI output)
  7. The sound of the clap plays through the iPad's speaker; our microphone picks it up, so it is recorded on the iPhone. And because we connected the iPhone audio jack OUT to the Atomos audio jack IN using a male-to-male 3.5 mm jack cable (this only works for iPhones older than the iPhone 7), the sound is also recorded on the Atomos Ninja, giving us the same clap on both recorders (see the sketch after this list)
  8. Then we can finally launch PDF Expert on the iPad and start our MOOC
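
If you want to check the sync offset numerically instead of lining the two clap waveforms up by eye, here is a minimal sketch of the idea in Python (the file names are hypothetical, and it assumes both audio tracks have been exported as WAV files):

```python
# Minimal sketch: find the clap in each recording and compute the offset.
import numpy as np
from scipy.io import wavfile

def clap_time(path):
    """Return the time (in seconds) of the loudest transient in a WAV file."""
    rate, data = wavfile.read(path)
    if data.ndim > 1:                 # fold stereo down to mono
        data = data.mean(axis=1)
    return np.argmax(np.abs(data)) / rate

offset = clap_time("iphone_audio.wav") - clap_time("atomos_audio.wav")
print(f"Shift the Atomos clip by {offset:+.3f} s to align it with the iPhone")
```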

PDF Expert is nice because it outputs the image of our PDF without the toolbars around it.

The Nano studio MOOC

For the Nano Studio we replace the iRig Pro Duo and iRig Power Bridge with the iRig Pro I/O. This makes for a lighter studio, but we will have only 1 microphone. On the other hand, power for the iPhone is provided directly through the iRig Pro I/O.

(Photo: Nano studio MOOC)

The tools we will use for the nano-studio are as follows:

  • 1x iPhone 5s
  • 1x iPad Pro 12.9”
  • 1x Apple Pencil
  • 1x Apple HDMI adaptor (for our iPad)
  • 1x Atomos Ninja 2
  • 1x SSD 256GB
  • 1x lavalier microphone (XLR)
  • 1x iRig Pro I/O (for our iPhone)

Here is the schematic of the nano-studio

(Schematic: Nano Studio MOOC v1.0, functional diagram)

The workflow is the same as for the micro-studio.

Pros

  • very light studio
  • possibility to use it on battery only
  • good sound quality
  • good image quality
  • the iPhone with FiLMiC Pro records to a .mov container with an AVC codec, and (in version 6) can also record the video in Log, giving some nice possibilities for light and colour correction in post-production
  • the Atomos records directly in ProRes, allowing us to work with the footage easily in post-production software (Final Cut Pro X or Premiere).

Cons

  • The Atomos Ninja 2 (and Ninja Blade) can only record 1080p at 24, 25 and 30 fps, but the iPad through the HDMI adaptor can only send an image at 60 fps (59.94); therefore 1080p cannot be recorded and the Atomos forces the iPad to fall back to 720p at 60 fps (59.94).
  • If we wanted to record the tablet in 1080p we would need a different, more expensive recorder, for example the Video Devices PIX-E5 or PIX-E7, or the Atomos Ninja Flame.
  • For the moment PDF Expert outputs the slides via the HDMI adaptor with a black border around them, not in full-screen 16:9. I hope for an update in the future.

Clip Exporter (3-month update)

So for the past 3 months here at CEDE at the EPFL we have been testing a small piece of software called Clip Exporter that can have a massive effect on the way we store and archive our footage. I have already written a more in-depth article on how we are using the software here. For this article I just wanted to give an update on whether Clip Exporter is still as good as we initially thought it was.

I have now completed the editing of a whole MOOC course implementing the extra steps with Clip Exporter. These extra steps do take time in the editing process and require a slight adaptation in the way I edit my videos. Generally the first edit is only for eliminating superfluous footage like pauses, hesitations, bad takes etc. Then I export the XML from Final Cut and run the Clip Exporter software to separate all the clips. I import all these clips onto a new timeline in the same Final Cut project. Personally, I like to call this new timeline the ‘Light’ timeline so I can clearly see which version I am working on. It is only in this second edit that I apply effects and transitions, as this information is not transferred with the XML file. This method has worked very well for me so far.

Whilst I am working on a MOOC project, which comprises anywhere between 10 and 40+ videos, I don’t delete the original rushes just in case a teacher or presenter in the course wants to revert to a different take for a certain video. So in the rush folder for each video I will normally have the original rushes plus the ‘video’ folder with all of the clips from Clip Exporter. Once all of the videos have been validated and uploaded, no more changes will be made, so at this point I can erase the original rushes. This is when I reap the benefits of the Clip Exporter software. Here is the folder size information for the project I have just completed:

(Screenshot: project folder size)

Here is the project folder with all the original rushes plus the Clip Exporter clips.

(Screenshot: project folder size after deletion)

Here is the same folder once all of the original rushes have been deleted from the folder.

So to break down the numbers:

Total size on server = 745.88 GB

Size of deleted rushes = 552.30 GB

Size of Clip Exporter clips left on server = 193.58 GB

Total space saved with Clip Exporter = 358.72 GB

% of space saved with Clip Exporter = 64.95%
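
To make the “space saved” figure explicit: it compares the new policy (keep only the Clip Exporter clips, 193.58 GB) against the old keep-everything policy (keep all the original rushes, 552.30 GB). A few lines of Python reproduce the numbers:

```python
# All values in GB, taken from the figures above.
total_before = 745.88    # original rushes + Clip Exporter clips
rushes_deleted = 552.30  # original rushes erased after validation

clips_left = total_before - rushes_deleted  # 193.58 GB kept on the server
saved = rushes_deleted - clips_left         # 358.72 GB vs keeping the rushes only

print(f"Clips left on server: {clips_left:.2f} GB")
print(f"Space saved: {saved:.2f} GB ({saved / rushes_deleted:.2%})")
```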

As you can see from the breakdown, I can make considerable space savings in my projects by using this one little piece of software. So after a few months of using the software on live projects I can definitively say that it has met all my expectations for what we want to use it for: saving space.

If I could develop something to make Clip Exporter more effective for the way we use it, I would develop a Final Cut Pro plugin to directly export the clips and create a new ‘Light’ timeline from within Final Cut. Or a more sophisticated XML exporter that included effects and transitions (obviously this would only work for FCP-to-FCP projects), but that would be something for Apple to develop, not the Clip Exporter team, so I’m not holding my breath!

Camera test: AJA RovoCam 4K

This week we tested the new RovoCam from AJA to see if we could improve the recording of our hands in studio.

The problem: if we choose a better camera with a nice big sensor, it creates moiré with the 24-inch Wacom® tablet that we use. This is because the distance between the camera sensor and the tablet is just right to create interference between the two respective “pixel” grids (the sensor’s and the screen’s).

The solution: use a camera with a smaller sensor. Changing the distance between the camera and the tablet does not help, because we would need to zoom in to fill the frame with the tablet, which results in the same screen-to-sensor pixel ratio and recreates the moiré.
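
As a back-of-the-envelope illustration of the interference (my own toy model, not a measurement we made): moiré shows up when the tablet’s pixel grid, as projected onto the sensor, has a spatial frequency close to the sensor’s own grid, and the visible band repeats at the difference of the two frequencies. The numbers below are made up for the demo:

```python
# Toy moiré model: two nearby spatial frequencies beat at their difference.
sensor_grid = 120.0  # sensor sampling grid (cycles/mm), hypothetical value
screen_grid = 112.0  # tablet grid as imaged onto the sensor (cycles/mm), hypothetical

beat = abs(sensor_grid - screen_grid)  # 8 cycles/mm
print(f"Moiré band repeats every {1.0 / beat:.3f} mm on the sensor")

# Moving the camera back and zooming in to re-fill the frame leaves the
# projected screen_grid unchanged, so the beat (and the moiré) stays.
```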

Pros:

  • Nice image quality
  • No Moiré
  • HDBaseT camera! (accepts up to 100 m of Cat6 cable)
  • 1080p50 and 1080p60

Cons:

  • No zebra controls
  • Focus not constant when changing the zoom.
  • The minimum focus distance changes drastically while zooming.
  • No way to easily set the focus manually. Normally you would zoom in, set the focus and then zoom out, but because the focus is not constant and the focus distance is restricted by the zoom, this is not feasible.
  • A manual focus helper in software would be great.
  • Strange behaviour: a 1080i25 selection is accepted by the recorder (Blackmagic® DeckLink 2) as 1080i50?
  • No 1080p25 or 1080p24
  • No 1080i50 selection (but see workaround)
  • RS232-to-USB control does not work with El Capitan; there is no way to use the RovoControl software under El Capitan, and thus no way to set up the camera… unless you connect it to an older Mac OS (it was working on OS X 10.9)
  • RovoControl 1.0 seems to have trouble changing parameters between Auto and Manual. I could not set the shutter speed manually: every time I selected a speed like 1/60 it jumped to 1/2000 by itself. Also, when I set the focus to manual and then switched through the tabs of the app from Camera Control to Settings and back to Camera Control, the focus was reset to Auto. (*)

(*) Tip: after saving a preset, the parameters seemed to respond correctly.

 

Tests:

I tested an option called ePTZ that allows the camera to film in 4K and crop a 1080p point of interest from the sensor. The option is nice but the image quality is impacted. I believe this is because in normal HD (1080p) the camera uses the full surface of the 4K sensor (downscaling 4K -> 1080p) and interpolates the pixels, which gives a clean image. When you use the ePTZ option you only use a 1080p portion of the 4K sensor, which gives a worse signal-to-noise ratio and degrades the image quality.
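
Here is a toy model of that explanation (my own assumption about how the scaling works, not an AJA spec): downscaling the full 4K frame averages roughly 2×2 photosites into each HD pixel, which halves the random noise, while the ePTZ crop keeps every noisy photosite as-is:

```python
# Toy model: noise in a 1080p crop vs a 4K frame downscaled to 1080p.
import numpy as np

rng = np.random.default_rng(0)
frame_4k = 100.0 + rng.normal(0.0, 4.0, size=(2160, 3840))  # flat grey + sensor noise

crop_1080 = frame_4k[:1080, :1920]                                  # ePTZ-style crop
binned_1080 = frame_4k.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))  # 2x2 average

print(f"crop noise std:   {crop_1080.std():.2f}")    # ~4.0 (unchanged)
print(f"binned noise std: {binned_1080.std():.2f}")  # ~2.0 (halved by averaging)
```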

The setup for the hands recording:

The red cable is a 50 m network cable connected to the receiver. I also needed to convert HDMI to SDI because the Blackmagic card accepts only SDI inputs.

(Photo: hands-recording setup)

Resulting images of the hands:

The setup for the focus and colour measurements:

I mounted the grids and measurement charts from our Spider box on a hot-shoe slider, then fixed it on a light stand. This way I could film the colour pattern and the focus ruler at the same time in front of the camera.

Cropped images for focus testing:

(Image: cropped frames for focus testing)

Here we can see a 200% zoom that reveals the compression damage between ProRes LT and ProRes Proxy. There is also less difference when recording in 50p vs 50i.

(Image: 200% zoom of the focus test)

360 Shooting

Over the past couple of weeks we have been testing out our 2-camera 360 video setup. Here at the EPFL we have an upcoming project that would make good use of 360 video, so we went out shooting some tests, and here is some info on what we did and why.

So, primarily we wanted to try and record a short piece to camera with two of the professors that would be involved. Our idea was to record the two in two or three different locations and keep the shots static. This way the viewer can look around the environment whilst listening to the speakers. Once the footage was recorded, we had the idea of adding some graphical illustration (images, text, animations) to the video to see what was possible.

So for the recording we used 2 Kodak 4K PixPro action cams mounted on our custom 360 base that we 3D printed (article coming soon). We also had some visitors take part in the test recording, and they brought with them a Samsung Gear 360 camera, which is an interesting 360 video solution. Before we set off we all decided that we would keep the recording relatively simple: the Kodak 360 rigs were set in the best location, whilst the Gear 360 was a ‘roaming’ cam that would be placed wherever looked interesting. We didn’t try to hide all the ‘behind the scenes’ stuff like crew and equipment, the reason being that we didn’t want to waste filming time hiding everything, as it was only a test and we were limited for time.

For shooting there wasn’t too much difference from a normal shoot, apart from the fact that the crew had to get used to being on camera as well. We set up the 360 cameras using Wi-Fi connections to tablets and phones so we could see the framing, set up the normal 2D camera, set up the mics and rolled.

Admittedly this was a very basic test, as we wanted to try different-sized environments as well as different types of lighting (daylight, low light, tungsten etc.), and we wanted to see the footage and stitching with these different parameters.

Here are some photos from the shoot, and there will be follow-up articles on how we got on with the post-production.

360 videos, let’s get testing.

So 360 videos are now hitting the mainstream with devices like Google Cardboard, Samsung Gear VR, the Oculus and the Vive. This has caused quite a stir in the office as we discuss the pros & cons, ins & outs, and benefits and setbacks of this new medium. We have all been thinking about how 360 videos can be applied to e-learning and whether it’s worth the effort. Some say it’s a gimmick, others say it’s here to stay. Either way, here at the MOOC Factory at the EPFL we put our heads together and started thinking about how we were going to test it out, to see if it is something we want to offer our course makers. Over the next few posts we will document some of the videos we made, how we made them, and what we thought about the gear, software and processes involved in 360 video making.

*Warning* Server at critical level!

Ahhh, the age-old problem of storing digital rushes on servers or hard drives is the bane of every production company and freelancer alike. Right now at the Centre for Digital Education (CEDE) at the EPFL we are starting to feel the pinch with our server space. We have known for a while now that our method of storing our rushes isn’t the most efficient. To help you understand our problem, let me first briefly explain our setup here and how we work with our rushes.

Here at the EPFL we have three studios specifically designed for recording MOOCs. The concept behind these studios was to allow the professor, presenter or speaker to be as autonomous as possible. So we have a studio assistant set them up in the studio and do a short test recording to make sure everything is working properly; after that they are on their own in the studio for their recording session. We tell them to start the recording at the beginning of their session and just let the video record continuously, capturing mistakes, pauses and all. Here lies the problem. At the end of their sessions we have three large video files (we record three video streams with audio simultaneously), often 25+ GB each, containing lots of mistakes and pauses that we will never use.

Our rushes policy up until now has been the classic “keep everything” approach. Obviously, this takes up loads of space on both our working server and our archiving server. But we can’t afford to work this way anymore, and now we have started to take action to fix the error of our ways.

After many discussions in the office, our studio technician, Gilles Raimond, found a lovely little program that seemed to be the answer to all our problems: ClipExporter.

The idea behind ClipExporter is to be able to quickly export clips from your Final Cut Pro X timeline.

You first make your rough cut.

(Screenshot: rough cut in the Final Cut Pro X timeline)

Export a .fcpxml file.

(Screenshot: exporting the .fcpxml file)

Then the program makes copies of each individual clip used in the timeline, referencing your original rushes.

(Screenshot: ClipExporter copying the clips)

So you end up with a folder full of short clips that you are going to use in your video.
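
Under the hood, everything the program needs is in that .fcpxml: which clips appear on the timeline and which source file each one references. As a rough illustration (this is not ClipExporter’s actual code), listing the referenced media takes only a few lines of Python; note that the source path sits on the <asset> element itself in older FCPXML versions and in a nested <media-rep> element in newer ones:

```python
# Rough sketch: list the source media referenced by a Final Cut Pro X
# .fcpxml export (the file name here is hypothetical).
import xml.etree.ElementTree as ET

tree = ET.parse("rough_cut.fcpxml")
for asset in tree.iter("asset"):
    src = asset.get("src")                 # older FCPXML versions
    if src is None:
        rep = asset.find("media-rep")      # newer FCPXML versions
        src = rep.get("src") if rep is not None else "?"
    print(asset.get("id"), "->", src)
```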

The idea is that you can then take these clips into After Effects or Nuke for further post-production work. But what we do is make a “light” version timeline of the same project and reconstruct the edit using these clips.

Normal timeline

(Screenshot: normal timeline)

Light timeline

(Screenshot: light timeline)

We can then delete the unused footage from our server, and voilà! From bloated, overweight rushes to space-saving, efficient clips in no time at all.

We have been testing ClipExporter for the past few weeks here at CEDE and we have been impressed with the results. We have had to change our editing workflow a little to incorporate the new software, which now takes a little more time. And so far we have only been testing it on our more ‘simple’ editing projects, where we don’t have a lot of video and audio layers or too many visual effects. But from the space-saving results we are seeing so far, it is well worth this extra effort.

(Screenshot: space savings after running ClipExporter)

Obviously, our case is a fairly extreme one in terms of the amount of space that we can save by using this software. But even if you save a small amount of space on each project, pretty soon it all adds up.

We will add some updates further down the road to see how we are getting on with ClipExporter. But for now everyone in the office can sleep a little easier knowing that we aren’t going to have warning e-mails saying our servers are about to burst at the seams.

**Sigh of relief**