We just completed our third session of M101P, MongoDB for Developers. We had 1063 students pass the course — a completion rate of 19%.
6% achieved distinction with a grade of 95% or better.
Congratulations to everyone who finished! We will be issuing certificates of completion in the coming weeks.
Our next session of M101P will begin June 17, 2013. Registration is open now. Start dates for other courses are as follows:
The courses are free. They are seven weeks long and require 4 to 10 hours per week, depending on your experience level. Register at education.10gen.com.
We had 1405 people pass M101P, MongoDB for Developers, for the course that started on Jan 21, 2013. That was a completion rate of 21.2%, and 6.07% achieved a score of 95% or better.
We had 1701 people pass M102, MongoDB for DBAs for the course that started on Jan 21, 2013. That was a completion rate of 26.41%. 10.98% achieved a score of 95 or better.
We will be issuing certificates of completion within the coming weeks and sending out a survey. We also still plan to share the fall survey results; we just have not had time to write the long post to go along with them.
Congratulations to everyone who finished. Look for our email with a survey.
All three classes are open for registration right now for the next run:
Register for free at education.10gen.com. The classes are seven weeks long and require between 4 and 10 hours per week, depending on your experience level.
I am pleased to announce that Jeff Yemin, the lead maintainer of the MongoDB Java driver, will be co-teaching M101J, MongoDB for Java Developers, with me starting on February 25th. M101J is a free seven-week online course aimed at teaching Java developers everything they need to know to get started building applications backed by MongoDB.
Jeff Yemin has been programming with Java for over 15 years, since landing a job at Sun Microsystems as a Java consultant and educator. His first experience with MongoDB came while at MTV Networks, where he led a project to create a unified Java-based content management system (CMS) with MongoDB as the underlying data repository.
Jeff will add significantly to the depth of knowledge available to students in the course and help bring the course in line with best practices for Java developers looking to leverage MongoDB.
Not only does online education lower costs; by increasing reach, it also helps justify using the best instructors. Jeff has significant experience as an instructor but has not taught while at 10gen, because it was hard to justify spending his time teaching small groups instead of continuing to improve the Java driver. But with our online course already having nearly 7000 registrations, the leverage and value of his participation is clear.
Sign up today for M101J. Classes begin on February 25th. Students who achieve 65% or better in the class will receive a certificate of completion from 10gen.
There are two sets of costs to running online classes: the capital cost of buying the equipment and the variable cost of the labor. In this analysis, I am going to look at first year cash costs. The capital equipment can be amortized over multiple years in a true accounting analysis.
To be able to teach a single class online, you need a recording studio, the equipment for standing news-reporter shots, and the equipment to edit video. The total cost as configured for us was about $13k for this equipment. Here is how the cost breaks down:
Note that the video editing station specified is good enough to edit more than one class. You could probably teach 5-6 new classes per year using this setup, sequentially.
We own three recording studio setups: two in the office and one in my home (I live two hours from the office). That let us record two classes simultaneously.
In addition, we employ a full time video editor. Our video editor estimates that it took him close to 15 hours to edit each hour of finished video. Our courses were about 14 hours of total edited lesson material, so that’s 210 hours per class. That’s 5.25 weeks of work at 40 hours per week. So each editor can probably edit about 7 classes per year.
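A quick sanity check on those numbers (the post's estimate of about 7 classes per editor per year presumably leaves slack for vacation and other duties, since 52/5.25 is closer to 10):

```python
# Back-of-the-envelope editing capacity, using the figures in the post.
hours_per_finished_hour = 15      # editing time per hour of finished video
finished_hours_per_class = 14     # ~14 hours of edited lesson material

editing_hours = hours_per_finished_hour * finished_hours_per_class
weeks_per_class = editing_hours / 40          # at 40 hours per week

print(editing_hours, weeks_per_class)  # 210 5.25
```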
I arrived in May of 2012 and we had our first classes completed by December. Our engineer arrived a bit later, but I would say approximately one man-year of labor went into creating our first two classes. That cost dwarfs the capital equipment.
We also pay approximately $1700 per month to host the classes on Amazon Web Services ($20,400/year).
We also had all video captioned at 3Play Media. Putting all that together, and assuming that it takes two full-time people on the online ed effort to produce the first three classes, we get the following annual costs.
Note that I dropped in $100k for the fully burdened cost of a head. That is not our number at 10gen; I just put it in as a placeholder to be concrete. On our team we have more than two heads, but some folks are working on more than just online education.
Although the startup costs are not insignificant at $250k, the incremental cost of adding one student or running the classes again is very low. Even considering the high startup costs, we will register at least 50,000 people in the first year, so the cost is under $5/student.
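Putting the post's numbers into a quick model (the $100k per head is the stated placeholder, and treating equipment as three setups at about $13k each is my assumption):

```python
# Rough first-year cash cost model. Figures come from the post, with the
# per-head cost a stated placeholder and the equipment count an assumption.
labor = 2 * 100_000        # two full-time people at the $100k placeholder
hosting = 1_700 * 12       # AWS hosting at ~$1,700/month
equipment = 3 * 13_000     # assumption: three studio setups at ~$13k each

startup = labor + hosting + equipment   # roughly the post's $250k figure
students = 50_000
print(startup, startup / students)      # cost lands around $5/student
```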
The economics of online education are amazingly good. There is at least an order of magnitude improvement over the costs of teaching in person.
We use what I will call news-reporter videos to advertise classes and at the beginning of each week of class. These segments look like this: mid-shot videos, typically of a single person talking directly to the camera.
We use these as promos for upcoming classes. The one above is to advertise the M102, MongoDB for DBAs class that is starting on January 21. We also use them to introduce each week of new material to put a face to a voice.
As most of our teaching segments are composited overhead shots of the Wacom tablet, these shots represent the only time the students see the teacher.
For recording the video, we use our Canon XA 10 video camera, which has XLR audio inputs. We have two Sony UWP-V1 wireless microphone systems and use a balanced audio cable to go from the receiver to the camera. We own two microphone systems in case we want to have two people talking on camera, which is useful if there are two instructors or if you are conducting an interview.
With only one microphone running, we get sound only on one channel (left or right). We clone the audio onto the second channel during post processing.
The wireless body mics are pretty good. They don't isolate the speaker as well as the over-the-ear Countryman mics, but they isolate well enough that we can shoot in a crowded office.
We wind up tweaking the gain levels on the mics a bit to get the right volume. You don’t want the mic to clip. It’s easy enough to make a soft signal louder and you don’t lose much fidelity doing so. But if your microphone signal is clipped, information is entirely lost and the clipped parts can’t be recovered. Audio that is clipped sounds bad.
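A tiny numeric sketch makes the asymmetry concrete:

```python
# A quiet signal can be scaled back up exactly; a clipped one cannot,
# because samples flattened at the ceiling have lost their true values.
signal = [0.25, 0.5, 1.5, 2.0, 0.5]      # peaks exceed a 1.0 ceiling

quiet = [s * 0.5 for s in signal]        # under-driven recording
restored = [q * 2 for q in quiet]        # boosting recovers it exactly

clipped = [min(s, 1.0) for s in signal]  # over-driven: peaks flattened
# No gain setting maps clipped back to signal; the peak shapes are gone.
print(restored == signal, clipped)       # True [0.25, 0.5, 1.0, 1.0, 0.5]
```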
Lighting is always tricky with humans. We have a small LED lighting system, but the subject has to be very close to the lights to get even illumination; we probably need bigger lights. Instead, we have moved to shooting these types of shots with a window behind us, using natural light.
We are also never quite sure what the backdrop should be for these shots. We have settled on a shot looking down the length of our NY office. We also purchased some solid-colored cloth backdrops but were ultimately not happy with the look.
We do sometimes write scripts for these shots. We don’t read from the scripts (we own no teleprompter) but writing them out often helps us decide what we want to say. Note that we don’t write out scripts for lesson videos at all.
These also tend to be some of the more stressful videos to record. Perhaps not surprisingly, it's harder to look right into a camera and speak than to work at a tablet while being recorded. And we record these out in the middle of the room, so people are acutely aware of how many takes we do (sometimes more than five; they usually decline in quality from the first onward).
You also get more comfortable being on camera the more you do it, and ultimately the first video you record is the promo video for the course. If timing permits, I would recommend re-recording it after you have finished recording the course.
We used a Google Docs spreadsheet as the main tool for organizing the curriculum. The spreadsheet is shared among me, our TAs, our video editor, and our software engineer. We started at a coarse level and then slowly refined the plans to the point where we could sit down and record segments.
Our courses are seven weeks long. Each week delivers about two hours of lecture material with a homework assignment at the end of the week. Our lessons (in M101) are broken into small 3-5 minute segments, each with the following structure:
When students watch the class, they see a teaching segment that includes the intro to the quiz. The intro to the quiz shows the instructor gesturing and pointing at the actual quiz text, nicely typeset. Here is an example from the course:
After they watch the video segment, they are presented with the quiz within the course system on the web. They can try a few times on the quiz and then we show them the answer. Quiz questions don’t count toward their grade. They exist just to reinforce learning.
After they take the quiz, they can optionally watch a short video segment, often less than 30 seconds, that explains the answer.
The edX system has quiz rendering built in, so you might wonder how we can show a quiz on camera before we have configured edX to show our lessons.
To achieve this, we built our own course builder module that allows us to specify the quiz and then renders the quiz using the edX quiz rendering code. This allows us to preview the quiz as it will eventually appear before fully configuring edX to display the lesson.
From a video editing and production standpoint, we record a single video segment with the lesson, the quiz lead-in, a pause, and then the quiz answer. Jerzy, our video editor (previous post) will then produce that content into two segments. The first contains the lesson and the quiz lead-in. The second contains the quiz answer.
We configure edX to show the first segment, then truly present the quiz, and then show the quiz answer. Those two videos above are examples.
From the instructor's standpoint, it's a very linear process: I specify the learning goal and make up a quiz that tests it, then add those to the Google Docs spreadsheet, one row for each lesson. The columns for each lesson are:
I will sit down and design the whole week this way. When I enter the video recording studio, I will then
At the end of each segment, Screenflow pops up and asks me to name the segment. I typically give it a logical name that makes sense to me (m101_fall2012_week5_lesson3) and save the file in a Dropbox directory shared between me and our video editor. I then add the name to the Google Docs spreadsheet.
The overhead camera file is on the SD card in the camera. If I am in the office, we leave it to our video editor to retrieve this file; if I am working from home, I copy it to our shared Dropbox directory.
We found Dropbox (a paid account) to be a pretty effective way of moving files around. The Screenflow files are small, but the overhead video files are enormous; from my home camera, they were often over a gigabyte for an unedited eight-minute segment. I also bought the Dropbox Packrat feature so that I can always get back a file that is accidentally deleted, along with all earlier versions.
I fill in the Google Docs spreadsheet columns with the names of the files and any comments to our video editor, and then move on to the next segment.
Our video editor, Jerzy Fischer, then edits and composites the videos and adds the YouTube URLs to the Google Docs spreadsheet.
At this point, we repeat the process for every lesson for the week. When we are all done, the YouTube URLs are copied manually to the coursebuilder, where we designed and previewed the quizzes, and we can deploy into the edX stack. We have a tool that builds the edX configuration files from our coursebuilder representation, which is stored in MongoDB.
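To make the pipeline concrete, here is a hypothetical sketch of what such a build step might look like. Every field and document name here is invented for illustration and is not 10gen's actual tool; in the real system, the lesson documents would come out of a MongoDB query (for example, via pymongo's find()).

```python
# Hypothetical sketch of a coursebuilder-to-edX export step. All field
# names are invented; in the real tool, lesson_docs would come from a
# MongoDB query such as db.lessons.find({"week": 5}) via pymongo.
def build_week_config(lesson_docs):
    """Turn lesson documents into an ordered edX-style sequence config."""
    children = []
    for doc in sorted(lesson_docs, key=lambda d: d["order"]):
        children.append({
            "display_name": doc["title"],
            "video_url": doc["youtube_url"],
            "quiz": doc.get("quiz"),       # None for answer-only segments
        })
    return {"type": "sequential", "children": children}

# Sample documents standing in for a MongoDB result set.
lessons = [
    {"order": 2, "title": "Quiz Answer", "youtube_url": "https://youtu.be/xyz"},
    {"order": 1, "title": "Lesson 3", "youtube_url": "https://youtu.be/abc",
     "quiz": {"question": "Which index is used?", "choices": ["a", "b"]}},
]
config = build_week_config(lessons)
print([c["display_name"] for c in config["children"]])  # ['Lesson 3', 'Quiz Answer']
```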
Homeworks are a little different. I developed those mostly from the command line, since many required programming, and I stored them in GitHub. Each homework assignment did have a quiz question, though, since that was our only means of validating anything; the quiz question might ask for a validation code or a value from the database.
We sometimes also recorded lead-ins for the homeworks and we typically did provide video answers for the homework assignments after they were due.
Homeworks often involved the students downloading some files. I typically created tarballs and zipped versions on my computer, checked them into GitHub, and emailed them to our engineer, who included them in the week's announcements or the text introducing a homework question. Sometimes I would put those files in Dropbox too and use Dropbox's ability to share a link to a file.
As you can see from the process above, designing good online classes is a significant amount of work and planning, like many things worth doing in the world.
If you have a different approach to curriculum development, I would encourage you to share it in the comments. Not enough folks are willing to share the nitty gritty details of how to design and produce an online class.
This is a guest post by Jerzy Fischer, our online content producer. He edited all the video for M101 and M102. The quality of the classes is due in no small part to his efforts.
It all begins with the raw content produced by the instructors. I am given two assets for each video segment: one is the video from the overhead camcorder, the other is a Screenflow bundle. Screenflow is the application we decided on for screen capturing, but we could have just as easily used Camtasia or any of the other screen-capture software out there.
The first thing to realize is that camcorders record video at 29.97 frames per second (or 59.94). Even if the camera is set to record 30p, you're still going to get 29.97 fps. Screenflow by default records at exactly 30 fps, so the compositing gets noticeably out of sync about every minute and a half. To get around this, I tweak the export settings in Screenflow.
I go to File->Export (or press ⌘E), then Customize->Video Settings, and change the frame rate to 29.97.
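For the curious, the size of the mismatch is easy to quantify: broadcast "29.97" is exactly 30000/1001 fps, so a true 30 fps capture gains about one frame on the camera footage every 33 seconds, or nearly three frames by the minute-and-a-half mark.

```python
# Drift between Screenflow's 30 fps and the camera's NTSC frame rate.
screenflow_fps = 30.0
camera_fps = 30000 / 1001            # "29.97" is exactly 30000/1001 fps

drift_frames_per_second = screenflow_fps - camera_fps
seconds_per_frame_of_drift = 1 / drift_frames_per_second

print(round(seconds_per_frame_of_drift, 1))    # ~33.4 s per frame of drift
print(round(90 * drift_frames_per_second, 2))  # ~2.7 frames off at 90 s
```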
After exporting with these settings, I import the Screenflow feed and the camera feed into Final Cut Pro. I use Final Cut Pro X on a Mac Pro, which was frustrating at first coming from Final Cut 7, but after I got used to the change I found it actually quite useful for this particular project.
Now that I have both feeds with the same frame rates in Final Cut, I composite the feed from Screenflow on top of the feed from the camera. The camera feed is going to be the bottom feed, so I start with that.
Because of the copy stand configuration, the video footage comes out looking something like this (did you realize in earlier posts that the video camera was upside down?).
I rotate the overhead shot by 180° and zoom appropriately. I delete the camera audio because the audio we get from Screenflow, using an M-Audio box and a Countryman over-the-ear mic, is much crisper.
I have to get the footage from the camera to be as perfect as possible because this is my bottom feed, meaning everything you see in the finished video except the writing comes from this feed. This requires color correction, resizing, and if the camera added any fisheye or barrel distortion or wasn’t 100% aligned I transform it to be perfectly aligned.
I also try to get the writing or text on the screen to be as light as possible so it won’t be visible underneath the Screenflow feed.
After I tweak the camera feed to my liking, I add the Screenflow feed on top of it in the timeline and detach the audio from the video so I can edit them separately:
Final Cut works like any other layer-based application: layers on top take precedence, so with this configuration all you see is the Screenflow footage. To solve this problem, I use a keying effect, which tells Final Cut to find all the white in the Screenflow feed and make it transparent so you can see the footage beneath it. I use Final Cut X's luma keyer, which I find performs best.
By using keying and inverting it and playing with the rolloff and matte settings I can do a pretty good job of keying out all the background white leaving me with just the writing or text. Here are my settings:
Now that I have my bottom feed (the footage from the camera) and my top feed (the footage from Screenflow minus the background), it is just a matter of resizing and aligning the top feed to completely cover the writing and text on the bottom feed (no small feat, but with some patience and a bit of luck I get it to work).
I also have to sync the two feeds visually, so I go frame by frame on both feeds and find an easy place to sync the feeds, like a window opening or a new canvas popping up. I put a marker on each feed and line up the markers.
After I have the feeds composited, I use the New Compound Clip feature in Final Cut so I have 1 video file to edit in the timeline:
This helps me not have to make 2 cuts every time I need to edit the video.
At this point I can begin the editing process. Since these are online classes and not in-person lectures, there is an opportunity to remove mistakes, and fix pacing. Editing makes everyone better.
Any “ums” and other accidental sounds are cut out (unless I am very pressed for time). Also, since you can talk much more quickly than you can write or type, a lot of video needs to be cut out to keep the pace steady. I have to strike a balance though since I do not want to over-produce the videos and make them feel artificial.
The instructors speak to me directly on camera when they make a mistake or go in a different direction than they wanted to and want me to cut it out. To my dismay I sometimes hear these directions after I have already spent more than a bit of time editing portions that they want cut out! But that’s the name of the game with editing videos for online classes. The alternative would be to listen to the entire segment straight through before I begin editing, which was not practical given the deadlines.
When the videos are completed, I upload the finished product to YouTube and add the URL to our course builder (we will talk about that in another post).
Because of everything that needs to be cut, the ratio of raw footage length to edited footage length is about 3:1, and the ratio of editing time to finished video is about 15:1. I will let you do the math to figure out the effort involved in editing about 16 hours of online video per class, for two classes, over a period of about eight weeks.
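For readers who do not want to do the math themselves, the figures above work out as follows:

```python
# The effort arithmetic left to the reader, using the post's own figures.
finished_hours = 16 * 2               # ~16 hours of video per class, two classes
editing_hours = finished_hours * 15   # 15:1 editing-to-finished ratio
hours_per_week = editing_hours / 8    # spread over about eight weeks

print(editing_hours, hours_per_week)  # 480 60.0
```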
These ratios improved toward the end as we tweaked our recording processes over the weeks and gained experience as instructors and editors. But overall it was a tremendous effort on my part to get through the material for two classes that were being produced at pretty much the same pace as they were delivered to the students.
In the fall of 2012 we were rarely more than one week ahead of the students and sometimes only a day ahead. In fact, the English captions from 3Play Media often trailed the publication of a video by 24-48 hours, because only hours elapsed between my completing the editing and the students seeing the video.
When the class ended, it was impossible not to feel a sense of palpable relief. It really is like putting on a performance. Nevertheless, I am looking forward to starting the process over again for M101J in February! The student response to the class was overwhelmingly positive and it is fun and rewarding to be part of the show.
I subscribe to Salman Khan's approach to online teaching, which is to view it as a one-on-one tutoring session.
Writing material out has two benefits. The first is that it paces you. You can’t throw up a tremendous amount of material at once. The second is that it limits you in the total amount of information you can present per square inch.
We use a Wacom tablet and Sketchbook Pro to create a whiteboard, and the recording station that I went over yesterday to record hand movements as we write. We then composite the whiteboard stream with the hand movements to create a final video for each lesson. A later post will cover the video editing required.
These instructions are for the Wacom Cintiq 12 WX pen-based tablets.
A Wacom tablet is a secondary display for your computer with an integrated writing surface that is sensitive to the included pen (stylus). The computer sees the tablet as a display and driver software on the computer integrates the pen as a type of pointing device.
Setting up Your Wacom on a PC
The Wacom tablet comes with a small interface box with a DVI video output and a USB output. Plug the USB cable into a free USB port on the computer.
There are two included video cables with the Wacom, a DVI-to-VGA cable and a DVI-to-DVI cable. If you have a DVI port on your PC, that’s the preferred connection. The DVI output will provide better calibration, less flicker and tighter alignment of the pixels and screen output.
There is a switch on the interface box that must be manually set to DVI or VGA. Make sure it is set correctly for the cable you are using.
At this point, you are ready to turn on the display and install the driver software. The driver software comes on a CD in the Cintiq box or you can download it from Wacom’s website.
Once you touch the pen to the tablet, it should offer to let you calibrate the tablet. If not, go into Start->Programs->Wacom->Wacom Tablet Properties and calibrate your tablet.
Setting up the Wacom on a Mac
Plug the USB cable from the interface box into a USB port on your Mac.
There are two included video cables with the Wacom, a DVI-to-VGA cable and a DVI-to-DVI cable. On a Mac with a Mini DisplayPort, you will need a converter cable. The Mac can convert its Mini DisplayPort to either VGA or DVI, but DVI works better: the DVI output provides better calibration, less flicker, and tighter alignment of the pixels and screen output.
Here is the connector you need for a Mac.
There is a switch on the interface box that must be manually set to DVI or VGA. Make sure it is set correctly for the cable you are using. See the PC section above for a photo of the switch.
At this point, you are ready to turn on the display and install the driver software. The driver software comes on a CD in the Cintiq box. But if you don’t have that, you can also download the driver from Wacom’s website.
Once you touch the pen to the tablet, it should offer to let you calibrate the tablet. If not, go into Apple Menu->System Preferences->Wacom->Calibrate and calibrate your tablet.
Using the Wacom as a Whiteboard
To use the Wacom as a whiteboard, you need a good pen-aware drawing program. Sketchbook Pro from Autodesk is excellent and I recommend it. It works the same on PC and Mac.
Once you install Sketchbook Pro, you are going to want to customize it to make it easier to use. We will make the following customizations:
Customizing the Default Pen Menu
The default pen menu is the menu that comes up when you hover above the tablet with the pen and push the bottom pen button.
With Sketchbook Pro running, open the preference pane. On a Mac, this is under the Sketchbook Pro Menu->Preferences. On a PC, this is under Edit->Preferences.
Choose Lagoon at the top. Now customize the features for the default menu as shown below. This involves choosing each of the items in the wheel and then choosing an appropriate feature. Starting at 12 o’clock, set the items to:
Customize the Size of New Canvases
We are going to customize the size of the new canvas to be the full width of the Wacom but three screens long. This will let you use the scroll tool to scroll down the page and get more paper without losing your existing work.
Open up the preferences pane and make the following changes to the default canvas. Note that the default width of the Wacom is 1024 pixels on the 12-inch version and 1280 on the 22-inch version.
Customize the Pen Line Thickness
The pen tool works best, but you will need to choose the pen and change the default line thickness and opacity. Click on the icon for the pencil in the lower-left corner of the Lagoon.
That should bring up this menu:
Now choose the ball point pen and click on brush properties. That should bring up this view:
Change the line width size to 2.5 and the ink opacity to 90%.
Using Sketchbook Pro for drawing
You should now be ready to use the Wacom as a whiteboard for teaching. Refer to yesterday’s post on setting up the rest of your recording station.
Recording the video lessons is at the heart of delivering a good online education experience. Our videos look like this.
We show hand movements composited on top of writing or terminal windows. I like this style because I think it is more engaging for students. Even when the instructor stops writing, gestures keep the student engaged. As humans, we are tuned to watch hands.
Stanford’s SITN has been teaching via TV since 1969. When I was there in the 90s, professors would sit at a table with an overhead camera looking down at a paper tablet. Professors would write out ideas and equations using a Sharpie.
The problem with using an electronic tablet is that you lose the ability to see the hand if you just capture the tablet's video stream. The solution is to composite two video streams.
We use a Wacom tablet positioned in a copy stand, with a video camera positioned to record the hand writing on the tablet. Yesterday's post has a photo of the setup. The copy stand is a Bencher 910-60. Note that our host computer for the Wacom is a 27-inch iMac.
In the setup shown, we are using a predecessor to the currently shipping Wacom Cintiq 22-inch pen tablet. The benefit of this large Wacom tablet is that it tilts forward and makes writing comfortable. It's also a bit brighter.
But the 22-inch tablet is expensive (we had one prior to our online ed effort), and you can also use the much more reasonably priced Wacom Cintiq 12WX. We currently have three setups, and two use the 12-inch version.
There are two video streams that we composite: one comes from the Wacom tablet, the other from an overhead video camera. We use Screenflow to capture the video appearing on the Wacom tablet. You can buy it directly or get it via the Mac App Store; I prefer the App Store because it lets me install it on all the computers I use.
Because you want the final video to be legible when displayed at web resolutions of around 700x400, we mark off, using painter's tape, a space of approximately 1024 by 585 on the tablet. Make sure the aspect ratio of your marked-off space is 16:9, like HD video.
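For reference, the exact 16:9 height for a given width is a one-line calculation (the 585 above is an approximate taped measurement; the exact value for a 1024-pixel width is 576):

```python
# Exact 16:9 height for a given width; the taped-off 1024x585 area above
# is an approximation of this.
def height_16x9(width):
    return width * 9 / 16

print(height_16x9(1024))  # 576.0
print(height_16x9(700))   # 393.75, close to the ~700x400 web player size
```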
We have tried several video cameras to capture the overhead shot of the hand. At home (yes, I have a setup at home at this point), I use a Canon 5D Mark III with a 35mm lens, which I personally own. It takes brilliant video but is very expensive. The video camera I would recommend is the Canon XA 10HD, which has good manual controls and takes XLR audio.
Don’t forget to buy a handful of large capacity SD cards. We buy 64GB cards and have nearly 10 of them. There is nothing more frustrating than sitting down to record and finding that there is no card in the camera!
You will need a ball head to align the camera properly. We bought some inexpensive Oben BA-2 ball heads that I would not recommend: they flex too much, vibrate too much, and are difficult to adjust. I recommend the Arca-Swiss Z1, a predecessor of which I use at home (again, a personal item from my photography habit). You might need a mounting plate, and I recommend the Arca-Swiss universal plate.
Note that the Arca-Swiss is threaded to attach to a 3/8 inch screw and the Bencher copy stand has a 1/4 inch screw. You can get a reducing bushing to fix this.
OK, now you are almost ready to shoot. To balance the light of the copy stand with the light of the Wacom display, we found it best to reduce the intensity of the fluorescent lights on the stand by covering them with a Rosco neutral density filter. We bought one sheet, cut it in half, and attached it to the lights with painter's tape.
Alignment is critical for later compositing; it's a huge hassle for the video editor to have to remove parallax error from the overhead shot. Alignment is a bit easier with the 12-inch Cintiq because the tablet lies flat on the stand and you can use the copy stand's alignment grid to center the camera exactly.
We aligned the 12-inch Wacoms in the center of the copy stand because that was easiest, but it does make the tablet hard to reach as you write; I found that I often would stand to write on it. The 22-inch Wacom allows you to sit more comfortably.
You should use the camera's manual controls to force an exposure that exposes the hand correctly. With the neutral density filter, that exposure tends to slightly overexpose the tablet itself, which makes for easier compositing and whiter backgrounds. You should also adjust the white balance so the Wacom appears white without making the hands look too strangely colored. The light bulbs in the copy stand are fluorescent, like the backlighting of the Wacom, so this turns out to be pretty easy to do.
On my Canon Mark III, I wound up using ISO 4000, f/11, and 1/40th of a second. You want to be fairly stopped down (a larger f-number means a smaller lens opening) to get adequate depth of field; otherwise, the text or the hand will be blurry. I used custom white balance on my Canon Mark III.
We used Autodesk's Sketchbook Pro for whiteboard work on the Wacom. I will go over the settings we used in that program in a separate post. Once again, I recommend you buy it from Apple's Mac App Store. Don't use Sketchbook Express, which is bundled with the Wacom tablets; it lacks certain customization features that are useful.
Good audio is critically important to students. It will make you easier to understand and it will make your caption transcriptions more accurate. There is no reason not to have amazing audio.
We use an over-the-ear Countryman E6 Flex microphone with a cardioid pickup pattern and headroom for general speaking, wired for XLR. There are many variants of the E6. This one is tuned for general speech, is directional, adjusts to fit multiple people (hence the Flex), and has an XLR balanced audio output (the pro audio standard).
The E6 isolates your voice very well. As you can see from our photo in the previous post, we do have some acoustic insulation on the walls, but even if there are people talking nearby, the E6 will not pick them up much. From an acoustics standpoint, you just want a room that is acoustically dead without many reflections. Any room full of stuff will work pretty well. Carpet on the floor helps too.
To attach the E6 to your computer, you will need something that converts XLR to USB. We use an M-Audio Mobile Pre.
We record with the lights out in the room. Otherwise, we get specular highlights on the Wacom from the overhead lights. There is a checklist of things that the instructor must do taped to the copystand. It’s easy to forget to turn off the lights. You won’t be sitting in darkness. The lights from the copystand are pretty bright, even behind the neutral density filters.
Now you are ready to record. Start your camera and then start Screenflow, making sure that Screenflow is set to capture the Wacom display (the entire screen). We start the camera before Screenflow so we can capture the countdown produced by Screenflow, which makes it easier for our video editor to sync up the shots.
I will talk more about planning lessons, recording quizzes, and the workflow between our instructors and our video editor in a later post. Our video editor will also be writing a post on compositing the video in Final Cut Pro.
Now that the first run of the courses is over and we are officially in intersession here in the 10gen education department (new courses start up on January 21), I have time to talk about how we created the classes.
This will be the first in a series of blog posts about our production methods. I hope that others will benefit and create their own classes. The classes were well received, and the technique is very scalable. Online education will change the world.
The classes are built on the edX platform. 10gen runs an instance of the edX software on our own servers. There were two classes, M101, MongoDB for Developers, and M102, MongoDB for DBAs. I taught M101. Dwight Merriman taught M102.
We entered into a collaboration with edX to use the edX software. As part of that agreement, we contribute back any improvements we make to the software. I thank the good folks at edX, including Anant Agarwal and Rob Rubin for working with us.
Our classes were seven weeks long and designed to mimic college courses. Each week we delivered about two hours of lecture material along with quizzes and homework.
Although we are using the edX platform, our class flow is most similar to that of Udacity, a company whose techniques I have admired from the beginning. I took the first AI and machine learning classes when they were offered at Stanford, and then enrolled my daughter in Dave Evans’s CS101 class at Udacity when it launched. I also went through most of Steve Huffman’s CS253 course (web programming).
What I strived for was short lecture segments, two to five minutes long, each one designed to achieve a goal posed as “at the end of this segment, students should know X.” To test whether we had achieved the learning goal, we placed a short quiz after each segment. I credit my wife, Bari Erlichson, for explaining the basics of curriculum planning to me.
I like the Udacity technique of having the instructor introduce the quiz on camera and go over the answer, so we used a variation of that.
The lecture segments were all recorded using a Wacom tablet, with an overhead camera to also record hand movements. I will discuss the technique we used to record the video segments in a separate post.
We used a fairly simple quizzing engine for the initial run of the courses. It could handle only multiple-choice, check-all-that-apply, and fill-in-the-blank questions.
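To make those three question types concrete, here is a minimal sketch of how a grader for them might work. The function name and answer-key conventions are my own illustration, not the actual engine's internals:

```python
def grade(question_type, correct, submitted):
    """Return True if the submission is correct for the given question type."""
    if question_type == "multiple_choice":
        # Exactly one correct choice; the submitted choice must match it.
        return submitted == correct
    if question_type == "check_all_that_apply":
        # Order-insensitive: the set of checked boxes must match exactly,
        # so a partially correct selection still fails.
        return set(submitted) == set(correct)
    if question_type == "fill_in_the_blank":
        # Normalize whitespace and case before comparing free text.
        return submitted.strip().lower() == correct.strip().lower()
    raise ValueError("unknown question type: %s" % question_type)

# Example checks:
assert grade("multiple_choice", "b", "b")
assert grade("check_all_that_apply", ["a", "c"], ["c", "a"])
assert grade("fill_in_the_blank", "db.users.find()", "  DB.users.find() ")
```

The check-all-that-apply case is the one that trips up naive graders: comparing lists directly would penalize students for checking boxes in a different order, so comparing sets is the safer choice.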
Homework was due weekly. Homework answers were validated using our quizzing engine. When the students needed to work on programming assignments on their local computer, we gave them validation scripts that would check their work.
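The validation scripts followed a simple pattern: run some checks against the student's local work, and if everything passes, emit a short code the student can paste into the quizzing engine as proof of completion. The helper below is an illustrative sketch of that pattern under my own assumptions; the check structure, code format, and names are hypothetical, not the actual scripts:

```python
import hashlib

def validation_code(assignment_id, checks):
    """Run each (name, passed) check. If all pass, return a short code
    derived from the assignment id for pasting into the quizzing engine;
    otherwise report what failed and return None."""
    failed = [name for name, passed in checks if not passed]
    if failed:
        print("Validation failed:", ", ".join(failed))
        return None
    return hashlib.sha1(assignment_id.encode()).hexdigest()[:8]

# A real script would gather its checks from the student's local database
# or program output; the results here are hard-coded for illustration.
checks = [("collection exists", True), ("no documents below threshold", True)]
code = validation_code("hw2.1", checks)
if code is not None:
    print("Validation passed. Your code is:", code)
```

Deriving the code from the assignment rather than asking students to self-report keeps the weekly grading loop automated end to end.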
The final exam was also administered through the quizzing engine, but it additionally required students to write programs locally on their computers.
M101 adhered more strictly to the short-video-segment technique. M102, taught by Dwight Merriman, also included some longer segments.
In terms of staff, we had one full-time engineer, one part-time engineer (a full-time engineer working part time on 10gen education), one video editor, and the partial help of the person who runs our in-person training. And of course Dwight and I worked on the courses.
All videos were hosted on YouTube, with English captions transcribed by 3Play Media. The video player was embedded into the edX stack.
Students primarily interacted with each other in the forums, which were built into edX. The forum software was probably the least usable part of the system. I wish it were much more like Stack Exchange.
Overall, I would estimate that the course took me at least 20 to 30 hours per week. However, I was learning much of the material for the first time; an experienced instructor who has taught the course offline before would need less time.
The experience of creating the classes was pretty intense. Dwight and I both logged long hours and late nights recording material, designing quizzes and writing programming assignments. The engineers worked crazy hours to perform the customizations I asked for and our video editor saw the sun rise at the office multiple times.
We strived to deliver new material on Mondays. I would like to say that the whole course was developed and in the can on day one, but in reality we were often working right up to the weekly deadline we had set for ourselves.
Hurricane Sandy hit midway through, when we were already slipping, and the net result was that we delayed the courses a full week; they ran eight calendar weeks.
I will be blogging over the next few days on a number of topics related to the course.