Justin Israel

Dec 19, 2012
Justin At South Park

On my last day of work at South Park Studios, my co-worker Tory gave me a present: a book that collects a history of doodles and graphics, capturing significant moments during my four-year run with the team.

Tory tends to represent our team as dog characters; I can usually be identified by the pierced chin. Much of the content refers to inside jokes, but I feel it also contains plenty of generally hilarious imagery.

Thank you, Tory!

Oct 12, 2012

PyQt4 UI Development for Maya

Just released my third Python-based online training video through cmiVFX.com


This tutorial is about learning the PyQt4 Python bindings for the Qt Framework, and how to introduce new UI elements to Maya as a platform.
We discuss what comprises a “Framework” and a “GUI Framework”, and how Qt and PyQt4 work together.


Getting Started With PyQt4

There are multiple ways of getting a working installation of PyQt4, both for the general system and for Maya. We look into these approaches to get your system up and running to begin working with PyQt4!
We also talk about what is included, such as command line tools and applications, tips on how to test and learn the code, and how to structure a project.


PyQt4 Fundamentals

Let's get crackin' and learn the basics!
• What is a QObject? What is a QWidget? Common PyQt4 classes are explained in detail
• Working with the Qt Designer application, to build a UI visually
• Layouts: Making widgets resize elegantly and stay organized in your design
• Coordinate space: How do widgets transform in your 2D screen space?
• QApplication and the Qt Event Loop: The engine that runs your UI
• Events, Signals, and Slots: How components communicate, and how the application can respond to changes to make the UI dynamic


General Examples

With an understanding of the framework components, we can begin working with fully functional stand-alone examples.
• Common PyQt4 app template
• Subclassing Widgets: Adding custom functionality to the existing classes provided by PyQt4
• Dialogs: Raising dialog windows above existing windows, modal vs. non-modal, and creating forms. We look at different ways to validate the data users provide to these dialog forms.


PyQt4 And Maya Introduction

Finally, some Maya action! Maya has a slightly different approach to using PyQt4…
• How do the QApplication and the event loop work?
• Common Maya PyQt4 app template
• Looking at the Maya API’s MQtUtil class
• The sip module: Helping us translate between Maya’s Qt and our own PyQt4 code


Replicating Maya’s UI Components

What better way to see examples of creating UI for Maya than to replicate some existing functionality? This also gives us the opportunity to expand on it with custom functionality.
In this chapter we take two different UI components in Maya, build a basic custom version of each, and show how to link them up to Maya's own callbacks.
Some Features Of This Chapter Include:
• The QTableWidget
• Model / View separation with QTreeView
• Docking windows into the Maya interface
• Mixing together PyQt4, the Maya API, Maya commands, and callbacks
• Sorting model data



A button can be a button, and a slider might look alright in its stock form, but sometimes we want to customize the look of our widgets. This chapter introduces multiple ways of achieving a custom look for our components:
• Stylin’ Stylesheets: Use CSS-like syntax for applying style sheets to widgets
• Painting By … Paint events: For even more control, we can tell a widget exactly how to draw itself on the screen. We will look at two different examples of how to use custom painting.

Previous cmiVFX tutorials:

Jul 25, 2012

In a recent python project where I was sending multiple messages per second of data over a basic socket, I had initially just grabbed the cPickle module to get the prototype proof-of-concept functioning properly. cPickle is awesome for easily serializing more complex python objects like custom classes, even though in my case I am only sending basic types.

My messages were dicts with some nested dicts, lists, floats, and string values; roughly 500-1000 bytes. cPickle was doing just fine, but there came a point where I wanted to investigate the areas that could be tightened up. The first thing I realized was that I had forgotten to tell cPickle to use its binary format (the default is ASCII). That alone saved quite a bit of time. But then I casually searched online to see if any JSON options might be better, since my data is pretty primitive anyway.

I found UltraJSON (ujson), a pure-C JSON parsing library for Python, and ran some tests. There are benchmarks on the project page for ujson, as well as other articles on the internet, but I just wanted to post my own results using a mixed-type data container. ujson came out extremely fast: faster than binary cPickle and msgpack in the encoding test. In the decoding test, though, msgpack appeared to be fastest, followed by binary cPickle, with ujson coming in third.

This test included the following:

Here is my Python 2.7.2 test script using timeit for each encode and decode step.
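A sketch of this kind of benchmark, using modern stdlib names (pickle in place of cPickle, and json standing in for ujson when it is not installed; the message contents are illustrative):

```python
import json
import pickle
import timeit

# Roughly the shape of message described above: nested dicts,
# lists, floats, and strings, in the 500-1000 byte range
message = {
    "id": 1234,
    "name": "render_job",
    "frames": list(range(20)),
    "weights": [0.25, 0.5, 0.75],
    "meta": {"user": "justin", "host": "localhost", "priority": 2.0},
}

N = 10000

# Pre-encode once so the decode tests have input to work with
pickled = pickle.dumps(message, protocol=pickle.HIGHEST_PROTOCOL)
jsoned = json.dumps(message)

print("pickle encode:", timeit.timeit(
    lambda: pickle.dumps(message, protocol=pickle.HIGHEST_PROTOCOL), number=N))
print("json   encode:", timeit.timeit(lambda: json.dumps(message), number=N))
print("pickle decode:", timeit.timeit(lambda: pickle.loads(pickled), number=N))
print("json   decode:", timeit.timeit(lambda: json.loads(jsoned), number=N))
```

Swapping `import ujson as json` at the top would exercise the C library under the same harness.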
Jun 21, 2012

A recent project of mine involves research and development with an XBOX 360 Kinect Sensor. Being a python guy, I started searching for python bindings to some OSX-supported framework. When you just get started looking into this area it can be a little confusing. There are a number of layers to the software stack to enable one to accomplish anything meaningful. This is just a short and general blog post outlining the basics of what I have discovered thus far, to help anyone else that might also be getting started.

At the lowest level, you need a driver. Something that can talk to the USB device that is the Kinect sensor. When you purchase the XBOX Kinect for Windows version of the sensor, and you are going to be developing on windows, much of this whole stack is provided to you by way of the Kinect SDK. But for the open source folks with the standard XBOX 360 sensor, you need to piece together your own solution.

Two drivers that I have discovered thus far:

I started with OpenKinect (libfreenect) because it comes with a python wrapper included. There were a few dependencies (I will talk about specific build steps in just a moment), but once I got it installed I was able to fire up the included glview app and see both depth and RGB data streaming in from my sensor. The role of these drivers is simply to provide the basic streams: depth, RGB, audio, and a few other sensor data streams. If your goal is to start tracking players, seeing skeletons, and registering gestures, the drivers are not enough. You would have to build your own solution from this raw data at this phase in the game.

You would now want to look into middleware that can take the raw data and provide an API with higher-level information. This includes finding users in the scene for you, tracking their body features, and giving you various events to watch for as the data streams.

Being that my goal was to have python bindings, I found my options to be much more limited than if I were developing in C++. Wrappers have to exist for the framework you want. This is where my research really started ramping up. I spent a few days dealing with compiling issues, as well as having an actual bad power adapter that had to be exchanged. But all said and done, here is what I have settled on thus far…

  1. Driver: PrimeSense Sensor
  2. OpenNI Framework
  3. NITE middleware for OpenNI
  4. PyOpenNI python bindings

Install Details

Install homebrew (package manager)


Install build tools

Install python2.7

Suggestion: virtualenv Environment

This is not a requirement. But I recommend using virtualenv to set up an environment that specifically uses python2.7 so that you don’t have to fight with mixed dependencies and versions.

Create a virtualenv called “kinect”
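This step might look like the following (assuming virtualenv is installed and python2.7 is on the PATH):

```shell
# Create a virtualenv pinned to python2.7, then activate it
virtualenv -p python2.7 kinect
source kinect/bin/activate
```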

Install libusb (patched version)

There is a special patched version of the libusb library, in the form of a homebrew formula.

Now copy platform/osx/homebrew/libusb-freenect.rb -> /usr/local/Library/Formula/
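In shell terms, assuming a default /usr/local homebrew prefix and that the formula is named after the file:

```shell
# Copy the patched formula into homebrew's formula directory, then build it
cp platform/osx/homebrew/libusb-freenect.rb /usr/local/Library/Formula/
brew install libusb-freenect
```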

Install SensorKinect drivers

Then uncompress Bin/SensorKinect093-Bin-MacOSX-v*tar.bz2

Install OpenNI framework
  1. Go here: http://www.openni.org/Downloads/OpenNIModules.aspx
  2. Download Unstable Binary for MacOSX
  3. sudo ./install.sh
Install NITE middleware (for OpenNI)
  1. Go here: http://www.openni.org/Downloads/OpenNIModules.aspx
  2. Download Unstable MIDDLEWARE of NITE for OSX
  3. sudo ./install.sh
Install PyOpenNI

Be aware that on OSX, PyOpenNI requires a framework build of python 2.7+ and that you must build it for x86_64 specifically. Also, I was having major problems with cmake properly finding the python includes location. I had to suggest a fix, so please see here for the necessary corrections. I have referenced a patched fork of the repository below.

copy the lib/openni.so module to the python2.7 site-packages
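For example, assuming the "kinect" virtualenv suggested earlier:

```shell
# Make the compiled PyOpenNI module importable from the virtualenv
cp lib/openni.so kinect/lib/python2.7/site-packages/
```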


Once you have everything installed, you can try out the examples that are included both in the NITE source location that you downloaded and also in the PyOpenNI source location:

  1. NITE/Samples
  2. PyOpenNI/examples
I also tried out ofxKinect (github.com/ofTheo/ofxKinect) on the side, which is an addon for openFrameworks. This is kind of a separate path from the OpenNI stack; I would say it's more like an advanced offering of libfreenect. Using the included example, I recorded a 3D point cloud that is built on the fly from the RGB and depth data:


Apr 14, 2012

Permalink - writemycode.net

In the spirit of a very similar blog post, I decided to expand upon a specific area of that article…

Consider a question like this:

I want to do this. Create 10000 files, (filename can be just combination of time and random number). File size should be 4k. And I want to time this. say how many seconds it will take.

How can I do this on bash?

Thank you.

Obviously this person needs some assistance, and the question is very short and easy to understand. But the problem with this type of question is that there are really only two ways someone can approach the answer. Your options for an answer are either to just give the person the complete code snippet as they have requested, or fall back on a lengthy personalized tutorial.

Short questions that primarily request a complete code example as an answer are counter-productive to code communities.

Answer Option 1

As the topic of this article suggests, option 1 is “Write my code for me”. It might seem easy as a one-off situation to simply donate a working code snippet and get the person asking this question moving on their merry way, but really this is not helping them in the long run. They haven’t learned anything beyond the process of hitting a roadblock, then immediately going online to ask for a solution. Had this person included some references to what they have researched, and most importantly a code snippet representing what they have attempted, viewers of this question would have a basis for comment and potentially an answer pointing out where the person has gone wrong.

Answer Option 2

And in the other direction: providing a lengthy tutorial. We all want to help and teach, but this is just one of numerous questions floating out in the ether that requires a response. Can we really spare that much time for every single question like this to re-teach material that is most likely already documented in generalized contexts all across the internet? That would create quite a lot of redundant information simply because each person combines new components into a new question needing a new lesson. Really, every part of this question can be Googled quite easily. Why ask a community to do a new custom writeup for you?

Now, assuming we actually wanted to write a tutorial for this person asking the question. The problem at this point is where do we begin? The information provided doesn’t suggest that this person has a grasp on any part of the problem. So the tutorial answer might need to include:

  1. Bash and for-loops
  2. How to get the current time
  3. How to generate random numbers
  4. Creating files
  5. Populating new files to a specific size
  6. How to write a complete bash script, and time its execution.

Had they told us what they know how to do so far, and what aspect has them stuck, we could simply focus on one area and provide a good bit of knowledge to get them moving again. But right now this is just too much work to net a situation where they will learn something.

For those that are immediately inclined to provide the complete code snippet to solve the problem, where do we draw the line? What if the question being asked would need 10 lines of code? 20? 100? And if you are also interested in frequently helping people, would you be willing to provide 5 lines of complete code to 10 people a day, knowing that each person probably didn’t learn much? Furthermore, after having given this individual a quick answer, you have now rewarded their lazy behavior, and more than likely just encouraged them to repeat the bad habit again.

Through a conversation with my coworker, some interesting metaphors came up that I simply can't resist sharing…

Vending-machine communities

Put a question in the slot and pop out a solution.  
Be it a traditional forum, an online discussion group, a mailing list, or a trust-and-reputation-based technical site like stackoverflow.com, these communities are driven by people, not machines. People have to take time to review content and contribute their knowledge. We all work hard to acquire that knowledge, so let's all try to put some value on it in the form of the quality of our questions. A 25-cent answer is insulting. Treat your questions like they are costing you actual money that properly reflects the value of people's knowledge. Code communities don't work for you, and you don't work for them. We are all here to help because we love it. Please don't make us hate it, or feel like we are all just part of a big vending machine.

Toilet paper answers

Answers that can be used only one particular time for one particular situation.
There is an insane amount of content on the internet. It's hard enough sometimes to sift through the results of a vague Google search, let alone the content on our individual communities. When you ask a question that provides zero context, or no proof of the extent of your current effort, then both the question and the answer are for the most part throwaways. If content is going to be a persistent part of a community space, it should aim to benefit future support-seekers with similar situations. Referring to the example question above, someone going with Option 1 ("Write my code") will end up providing an answer that will likely not help many people beyond this situation, unless they too are looking for a way to do a for loop and create 10,000 4k dummy files and measure the execution time. The only way it would stand to benefit future visitors is if the answerer went with Option 2 and wrote a fully self-describing tutorial.


I can only speak for myself about what I might do. I consider myself to be the type that would go as far as to look at API docs for someone, and work out some pretty extensive examples on my local machine. I might even install libs that I don’t have or have never used before in an effort to provide assistance. But I need to be motivated to do so. It’s exciting for me to write out a page-length of information if I know it will help this person. But in a case like the above there is no show of effort and no context provided — just a person asking to have code written for them.

As I suggested already, we all want to help. That's why we frequent these forums and sites. But we help these types of answer-seekers even more by withholding instant gratification.

And now… I direct you back to whathaveyoutried.com


Dec 5, 2011

I received my brand new MacBook Pro a few days ago. Amazing machine. Probably the best laptop I have ever laid my hands upon. It was the early Feb 2011 model, so I got a crazy good deal. But something caught my eye that I just had to investigate…

I’m a bit obsessive when it comes to small issues that I can’t resolve, and this one is still bothering me. I noticed that when I smooth-scroll (trackpad or smooth-wheel Logitech mouse) on content such as webpages or Mail, there is a flicker in the display of text and content. How much it flickers depends on the size and orientation of the content. My eyes just couldn’t ignore it, and I figured it couldn’t possibly be normal functionality. Thus, step one of my problem solving began: Google search.

My search turned up a number of similar complaints about both MacBook Pro and Air models, such as this discussion: https://discussions.apple.com/thread/2645516?start=0&tstart=0
It seemed the problem was not limited to my specific early 2011 MacBook Pro. Suggestions ranged from resetting the PRAM (holding cmd+option+p+r for a couple of reboot cycles), to cycling the screen through display resolutions, to adjusting the brightness of the display. Nothing seemed to make much of a difference for me.

Inversion (pixel-walk)

Googling also turned up a reference to this site, which offers various types of LCD tests: http://www.lagom.nl/lcd-test/inversion.php

According to this site, it’s normal to have a slight flicker in one box, while heavier flickering suggests voltage alignment issues in the LCD display. Using this test, I tried it on a number of Apple product configurations. Here is a collection of my findings:

| Device | Display | Scrolling | Not scrolling |
| --- | --- | --- | --- |
| MacBook Pro (early 2011) | 15″ 1440×900 | Heavy multi-color flickering in all boxes | At least 2 boxes always lightly flickering |
| Mac Pro MacPro4,1 (2009) | 24″ Cinema Display | Light flickering in all boxes | No flicker |
| Mac Pro MacPro5,1 (2010) | 27″ LED Cinema Display | No flicker | Very light flicker in box 7a |

Inversion (pixel-walk) test results on LCD displays

Apple Support

After speaking to support over the phone, they suggested that I go into the store for more help. When I got to the store and started speaking with a tech at the Genius Bar, he had never heard of this issue before. But once I showed him an example on both news.google.com and in my Apple Mail, he definitely acknowledged that it’s noticeable. He then went off into the back to research the issue a bit.
When the tech came back he said that he had found no outstanding information from Apple about this issue. A second tech even came and looked at the issue, and had no explanation for it.
I went around the store and checked web page scrolling on 13″ MacBook Airs, and 13″, 15″, and 17″ MacBook Pro models, with and without the higher resolution LCD display options. All models exhibited the same flicker during scrolling. My final recommendation from the stumped Apple tech was that it could be an issue with Lion and its rendering of fonts, or whatever, and that I could either return my laptop, or hold out for some kind of fix from Apple. Basically, no idea.

My question is… Am I being overly sensitive to this display flicker? I figure I can’t be the only one, as per the discussion lists of other users that notice the problem. Some of my friends with the same laptop said they have never really noticed until I pointed it out. I just wonder why such a fantastic laptop would exhibit this visual artifact, and whether it is something I should just accept as being normal?

Do you have this problem and is it noticeable? Post your feedback!

Are you affected at all by flicker when scrolling on a MacBook?
Nov 20, 2011

A question came up in the Maya-Python mailing list that I thought was a really good topic, and should be reposted.

Someone asked how to create Maya UI objects and embed them within your main PyQt application. Specifically, he wanted to create a modelPanel and embed it so that he would have a camera view within his own PyQt window.

Here is my example of how to achieve this…

You need sip and the MQtUtil functions to convert between Maya UI paths and Python QObjects. It’s the same idea as using those functions to get a reference to the Maya MainWindow in order to parent your dialog.

Nov 15, 2011

Second video in the python for maya series, just released through cmiVFX!

Python For Maya – Volume 2

If you watched the first video, you now have a good grasp on Python. Sweet. Let’s plow through some more involved concepts like python juggernauts!

With a working knowledge of the python scripting language, and the Maya Python commands API, we can continue to learn new ways to solve more challenging problems, create complete scripts, and build user interfaces around our tools. We also introduce the Maya Python API; a lower-level interface into Maya.

This video focuses more on breaking down full scripts, as opposed to typing out syntax. It’s jam-packed with information and moves fast to deliver you as much brain food as possible. The first segment of the video transitions from beginner to intermediate level, with the majority of the video being intermediate, finishing out by touching on advanced concepts. The included project files are abundant, complete, and full of helpful documentation, so you can take your time and learn about each piece of the tools.

If you check it out, leave me feedback!

First video can be found here

Nov 9, 2011

This is a follow up post to my previous one on Installing PyQt4 for Maya 2011

Recently, while putting together my next video tutorial for Python for Maya, I came to a section where I wanted to demo PyQt4 in Maya 2012. But I was concerned that viewers would have to go through the complicated steps of building PyQt4 themselves. I noticed that other people have made precompiled PyQt installers available for Windows (here), but I could not find any for OSX or Linux. So I decided to put together a build.

I created a new project on github called MyQt4

It’s a Makefile that completely downloads and builds PyQt4 for Maya, and generates a .pkg installer. Hopefully someone can contribute improvements, since I don’t have a ton of experience writing Makefiles, and perhaps someone will create a Linux version.

Here is a link to the latest pkg build:

Snow Leopard: 


Mountain Lion:

Here are builds other people have made:

Oct 8, 2011

Just released my first online video tutorial, through cmiVFX

Python Introduction Vol 01 – Maya

Amazing at Animation? Master of Modeling? Conquistador of Character Rigging?

But how is your Python?

This course brings the talented artist into the fold of the technical-side of Maya. Learn the basics of Python, and its place in your 3D workflow, with visual examples and real world problems. Get a kick-start on adding some automation into your life, and solving common problems in a fraction of the time. By the end of this video, you should have a deeper understanding of one of the languages Maya speaks under the hood, and how to start viewing your scenes in terms of glorious Python code!

Check it out: http://cmivfx.com/store/320-Python+Introduction+Vol+01+-+Maya

If you check out this course, please leave me some feedback! I would love to hear your thoughts.
Stay tuned for more installments to come!