Why isn't Blender used in production? A UX design issue?

Blender has been around for a good twenty-ish years, yet we rarely see it in production. For a fully functional 3D package, with new functionality and improvements constantly being added, and which gets updates faster than Maya, 3DSMax, or other industry standards, one has to wonder why it has not been more widely adopted by the industry.

One answer could be that, like a lot of open source software, what gets emphasized is features over the key factor of user experience (UX). User centred design places the user at the heart of every design decision and interaction. My intention with this piece is to provide constructive feedback that the folks at Blender can act upon. As a 3D artist, I'd like to be able to switch to Blender completely, and I would be delighted to see the industry embrace it, because it has made 3D accessible to so many people regardless of their location, monetary restrictions and so on.

I began learning Blender about a year ago, after having been a Maya/Max/ZBrush user for over 15 years. I wanted to see what the fuss was all about, and I ran into a very steep learning curve. I had started with Maya and then learned 3DSMax about 10 years ago, and I have to say that learning curve was less steep. Both are owned by Autodesk, but 3DSMax was created by Discreet, so it still has a different user interface and different task flows than Maya.

So why does Blender not get used? Here are a few points from user experience design and human-computer interaction that may provide some insights; they also tie in to basic user psychology.

Based on my experience with Blender, I found four key areas related to user experience design that could be the issue:

  1. Design that doesn't account for Familiarity, Memory and Flow
  2. A lack of affordances - Not accounting for user agency
  3. Some basic UI design issues
  4. Wider UX design issues

Familiarity, Memory and Flow

I'm writing on the assumption that one of Blender's goals is to get 3D users of other software to switch to Blender. If that is not one of their goals, then this factor is not a problem that needs addressing.

I think most of us fall in love with 3D because the act of creation leads us into a state of flow, as described by Mihály Csíkszentmihályi. It's that state where time becomes suspended and the cognitive challenge of the task is balanced against our tacit knowledge, so the user is being challenged but not overwhelmed.

And on the flip side of flow, we all have to deal with crazy deadlines. If you're working on a project with a Monday deadline that landed on Friday afternoon, you look for software that is predictable and reliable, and that won't leave you searching six hours for a workaround to a bug.

Unfortunately, on one of my first attempts at using Blender in a production scenario, I was left with a model that had stray vertices when imported into Unreal Engine. So I chose to abandon it and use 3DSMax, which gave more predictable results, in the interest of being able to complete the task with the least stress possible.

In terms of HCI, or user experience design, here's what happens with users:

HCI (Human Computer Interaction) theory tells us that the efficiency with which users complete tasks depends on the cognitive load, or energy, required to complete them. There is a balance between the level of challenge and the level of effort required, which either drives a user forward into a state of flow or leads them to abandon a task out of frustration. The psychology also suggests that when users abandon a task, it affects their mood, feelings and even perceptions about other tasks they may need to undertake while working with the software. There's a phenomenon called the halo effect, where a person notices one good aspect of something and assumes the other aspects are good too, without examining them closely.
It works in reverse as well: in the context here, one bad task will colour the user's perception of the whole app as unusable, even if other task flows are less challenging or even familiar.

So, in the context of my first use described above: I might have had those extra vertices in my model due to user error (my fault), or the modelling flow (the user task flow) that produced the model in Blender might differ from most commercial software and require a different approach to achieve the same result. If it was the latter, that reinforces the point about cognitive load, memory and familiarity. If the experience had yielded results similar to what I get with commercial software, I might not have abandoned the task. Or maybe it was just a bug that needed working out.
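
In hindsight, a simple cleanup pass before export might have saved the model. As a minimal sketch, using Blender's Python API (bpy) and assuming the stray vertices were loose geometry on the active mesh object rather than an exporter bug:

    import bpy

    # Clean up the active mesh object before exporting.
    bpy.ops.object.mode_set(mode='EDIT')           # enter Edit Mode
    bpy.ops.mesh.select_all(action='SELECT')       # operate on the whole mesh
    bpy.ops.mesh.delete_loose()                    # drop vertices/edges not attached to any face
    bpy.ops.mesh.remove_doubles(threshold=0.0001)  # merge vertices closer than the threshold
    bpy.ops.object.mode_set(mode='OBJECT')         # back to Object Mode for export

The equivalent options also exist in Edit Mode's Mesh menu, but I only discovered that much later, which is rather the point.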

Affordances, control and user agency

Affordances refer to the icons and buttons that allow users to take actions. User agency refers to the sense of control a user has, which leads to the predictability, certainty, and even reliability and trust one places in a piece of software. The more a user feels in control, the better their experience and their sense of achievement, which feeds directly into their motivation to learn more and eventually enter a state of flow, where learned knowledge becomes tacit knowledge faster.

As an example, let's compare the opening screens of Maya, Houdini (which I've recently begun learning) and of course Blender.

Comparing the Blender and Maya opening screens

The Maya and Blender UIs

The first time I opened Blender, I was pleasantly surprised by the initial setting that lets you change the 3D navigation interactions: you can set it to Maya style, i.e. Alt + left/middle/right click for tumble, pan and zoom, or you can stay with the Blender defaults, which are closer to 3DSMax and lean on the middle mouse button for most things.
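
That choice is also scriptable, which hints at how configurable this layer is. A minimal sketch using Blender's Python API (bpy); the "Industry Compatible" keymap name is what I recall from the 2.8x splash screen, so treat the specifics as an assumption:

    import bpy

    # List the key configurations Blender currently knows about,
    # e.g. "Blender" and (in 2.8x) "Industry Compatible".
    for keyconfig in bpy.context.window_manager.keyconfigs:
        print(keyconfig.name)

    # Input preferences are exposed the same way, for example
    # middle-mouse emulation for laptops without a three-button mouse:
    bpy.context.preferences.inputs.use_mouse_emulate_3_button = True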

At first glance the colours, that is the dark greys, are similar, and this provides an initial level of familiarity; at a basic level, a user will be reassured by it. However, what becomes glaringly obvious between the two interfaces is how many more icons and affordances the Maya interface has.

So, back to the Blender interface. I was pleased I could have my Maya navigation, but I didn't know what to do next. I usually begin each project by throwing a cube into the scene; it forms the basis of what I will build. Blender gives you that cube by default, but you don't know where it came from. How do you get it back if you mess it up and delete it? How do you move it?

Look at the Maya UI: there, in bright orange, you see all the primitive shapes you will ever need, and to back that up you can go to the menu bar and hit "Create".

In Blender there isn't a visual affordance for a sphere, or a cube, or any primitive. And if you go to the top menu, you won't find one either. You need to go into the viewport and hit the Add menu.

When I initially started out, I scoured YouTube videos to find this. This interaction, creating a cube, should be the simplest thing in the world. Yet it's not.
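
Ironically, the operation itself is trivially exposed to scripting; the Add menu is just a front end for operators. A minimal sketch with Blender's Python API (bpy), using 2.8x-era parameters that may differ in other versions:

    import bpy

    # The operator behind Add > Mesh > Cube; brings a cube back
    # if the default one has been deleted.
    bpy.ops.mesh.primitive_cube_add(size=2.0, location=(0.0, 0.0, 0.0))

    # The other primitives sit behind similar operators, e.g. a UV sphere:
    bpy.ops.mesh.primitive_uv_sphere_add(radius=1.0)

So the capability is there; the issue is that a new artist staring at the viewport has no visual affordance pointing to it.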

The golden rules of UX navigation design (Prof. Alan Dix), tied in with affordances, are that a user needs to know:

  1. Where they are
  2. What they can do next
  3. Where they came from and how to go back
  4. Where the next action will lead them

These factors also tie in with memory and how it affects cognitive load. Designing for recognition rather than pure recall is a better way of optimising cognitive load during an interaction: give people a UI or a task flow that is familiar (it does not need to be identical) and they will perform better and more efficiently.
There are three kinds of memory: long term, mezzanine and short term. Long term memories are based upon major events in our lives, and these stay with us. Short term memory typically lasts from a few seconds to a few minutes: if someone asked you to recall a reference number while on a call, you could do it, but asked to recall it the next day, you'd forget.
Mezzanine memory refers to the middle term; you might recall what you had for lunch yesterday.

Now, memory is not photographic (unless you are Sheldon Cooper). The mind forms images, words and feelings which are vague, but when we encounter something similar they become vivid again; in a sense the mind fills in the blanks.

That's what you want to achieve with a UI: that sense of recollection, recognition and familiarity. Once again, this ties into agency.

Of course, Blender may not have had this goal in mind when they designed their UX strategy. Through repetition, people will eventually learn the nuances of Blender, as I did. I scoured many YouTube videos just to find out how to select, move and change the interface views, but now it's familiar and doesn't take as long.

But if one of Blender's goals is to bring in users from the established software realm, then this UX design strategy is not ideal.

As a side note, I recently started learning Houdini (in 2021), and I'm astounded at how similar the UI is to Maya, and how similar the interactions are to packages like Unreal Engine: the node graphs, for instance, where you hit Tab to create a new node, or the use of the VEX language alongside the VOP GUI, much as Unreal offers Blueprints alongside C++ programming.
This, in my opinion, makes for successful, well thought out UX design, because it removes a level of Herculean cognitive effort and saves the user time. They'll adopt it faster.
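
The node graph is also fully scriptable, which reinforces how consistent that mental model is. A minimal sketch using Houdini's Python module (hou), with purely illustrative node names; programmatically, this mirrors what pressing Tab in the network view does:

    import hou  # available inside a Houdini session

    # Create a geometry container at /obj and drop a Box SOP inside it.
    obj_context = hou.node('/obj')
    geo = obj_context.createNode('geo', node_name='box_demo')
    box = geo.createNode('box')
    box.setDisplayFlag(True)  # show the box in the viewport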

The Maya and Houdini interface compared

UI considerations: The ability to scan, decoration and grouping

Visual perception, like most other bodily processes, works to get the most results from the least effort. So the eye scans: it takes more cognitive effort to look at every element in a UI than it does to quickly identify the important ones.

Take a look at the Maya and Houdini interfaces. It becomes immediately apparent that the bright oranges and blues provide a strong contrast against the black; those are the items the eye notices first.

The layouts of the two apps, which of course have very different interaction and task flows, are also similar.

The tabbed layout up top, the main top menu, the left menu with tools, the right with the "outliner" (a file browser type design pattern), and in Houdini the node graph. Here you can see care has been taken with the principles of decoration and grouping to make scanning, recognition and recall easier for the user.

The Blender UI also has a tabbed layout, but the low contrast makes it difficult to see, and the tabbed layout and top menu sit on the same line. The icons also look dull and lack contrast. It uses the same three design patterns as the two apps above, i.e.:

  1. Tabbed layout
  2. Top Menu
  3. Right menu/file browser pattern

But somehow the decoration and grouping of items is missing, and the top menu and tabbed layout share the same line, which could lead to confusion. It might be worth moving these to separate lines.

Wider Usability considerations

A positive user experience also involves "accessibility". In this context I mean accessibility in terms of how easy the software is to learn and how consistent its task flows remain; in a UI sense, how often buttons move around when new functionality is added, and how often the series of keystrokes or menu items one needs to complete a task changes.

In my experience while learning Blender last year, the eight or so videos provided by the Blender Foundation on their YouTube channel were not nearly as clear as what random users posting Blender content provide. Additionally, when I arrived at Blender 2.8, a lot of the material covered in videos from previous years had become redundant, as the structure of the UI had changed dramatically, which led to some frustration.

As a contrasting example, I learned Substance and Unreal Engine simultaneously, and the content on their respective channels was on par with, if not better than, paid learning content.

The question, then, is: which of these is more accessible, and which less?

Conclusions

The intention of this piece is not to deride or bash Blender. As a tool, Blender has been used within its community to produce amazing work, work that can stand shoulder to shoulder with that of most 3D software.
New functionality is constantly being added. Perhaps the next step for the Blender community, to enable wider adoption in the commercial 3D world, is to give user experience and user centred design a proper look.
I think Blender is a blessing to 3D artists globally; it allows people to create magic. My intention in writing this article is to prompt improvement, so that commercial Blender usage will one day be on par with the big players of 3D. I have no doubt it will.