What I’ve learnt demoing VR to first timers.

Anyone who has the privilege of owning a Virtual Reality (VR) headset has, I’m sure, shown VR to someone who hasn’t experienced it before. I like to tag these people as VR first timers (jokes). I don’t actually like labelling things, but for the sake of this post I will. Anyway, back to the subject at hand.

I own an HTC Vive and it’s currently set up in my spare bedroom at home. I feel very privileged to own this hardware and to actually have a PC that runs it, and I take advantage of this by letting anyone have a go on my VR rig.

HTC Vive

Most of my family and friends have never heard of VR; they aren’t exactly geeky tech heads like myself. One of my friends thought it was some weird spy camera rig. Even after I described it as the next big thing called Virtual Reality, he still didn’t get it and just brushed it off. I had to plead with him just to give it a go; I was practically begging him to try it. I had a similar experience with my girlfriend as well.

I have demoed VR to a lot of people at various technology conferences. Typically at a Unity booth we would have some form of VR demo set up, whether it’s an Oculus Rift, HTC Vive or GearVR. The advantage at a technology conference is that I don’t have to beg people to try it.

What’s the difference between the three? Quite drastic, in my opinion, and the question is relevant to this post because you need to think about what type of VR experience you’d like to show a VR first timer. I don’t want to delve into it too much, but in simple terms: the Oculus is a sitting or standing VR experience connected to a PC; the GearVR is a mobile experience, a device with a slot for your Samsung phone; and the HTC Vive (saving the best ’till last) is a room-scale VR headset connected to a PC (it can also be a standing or sitting experience).

The GearVR is probably the most accessible VR experience on the market at present; given there are hundreds of millions of VR-ready Samsung devices in the hands of consumers, Oculus and Samsung are in a very powerful position. However, I personally believe in using the highest quality experience when showing VR to newbies, so that leaves the Oculus Rift and the HTC Vive. In my experience, demoing mobile-quality VR to first timers hasn’t delivered the same experience as the other headsets; I would love to hear if someone has seen the opposite, so please comment on this post below.

Let’s start with the Oculus. It delivers a higher level of immersion; from my experience demoing to consumers, reactions to the Oculus have been along the lines of higher quality graphics and better immersion. This is partly due to the technology, which in turn allows developers to push the boundaries of immersive experiences, considering the Oculus runs off a high-end PC and not a mobile device (same for the HTC Vive). I’m not saying GearVR apps cannot create highly immersive experiences; immersion is not defined by graphics quality or technology, but they can help.

Moving on to the HTC Vive: in my opinion, from my experience demoing to VR first timers and talking to professional developers, the HTC Vive delivers the best VR experience on the market. This is partly due to the room-scale features it offers, which are a complete game changer. Being able to physically move around the virtual room or space you are experiencing is extremely powerful, and it’s a superb feature for us developers to experiment with in new game designs.

What’s really strange is that the ability to move around a virtual room seems to confuse consumers at first. Almost every VR first timer who tried the HTC Vive didn’t think about walking around the room; it seems to come across as an unnatural thing to do, which makes sense when you think about it. Almost all core gaming experiences are static; since the beginning of video games you have never had the opportunity to physically move around a virtual space to affect the gameplay. The Wii was really the first global success in using physical movement to affect gameplay (is there a term for that?), but you were limited because you had to face the TV.

Content on the HTC Vive has introduced various input designs that let you traverse virtual environments in different ways. There have been a few talks on this and it’s a constant topic at present, with many developers experimenting with new ideas and defining input standards within VR. Teleportation is one common technique, but it doesn’t really work for all experiences; the ability to teleport already defines the genre of your game. I mean, surely your game has some sci-fi based advancements, right? As teleportation is not a natural experience for a human, the power of teleportation can break the immersion of the content. On the positive side, VR empowers you to experience what teleportation might feel like; in reality it’s a win-win situation for the consumer.
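As a rough illustration of how point-and-teleport locomotion can be wired up, here’s a minimal sketch in Unity C#. It’s not taken from any shipped title; the rig and pointer references are assumptions you’d hook up to your own camera rig and tracked controller.

using UnityEngine;

// Hypothetical minimal point-and-teleport sketch, not from any shipped title.
// Assumes playerRig is the root of the room-scale camera rig and pointer is
// a tracked controller transform used for aiming.
public class SimpleTeleporter : MonoBehaviour
{
    public Transform playerRig;
    public Transform pointer;
    public float maxDistance = 10f;

    void Update()
    {
        // Placeholder input; a real Vive app would read the controller trigger.
        if (Input.GetButtonDown("Fire1"))
        {
            RaycastHit hit;
            // Cast a ray from the controller and jump the rig to the hit point.
            if (Physics.Raycast(pointer.position, pointer.forward, out hit, maxDistance))
            {
                playerRig.position = hit.point;
            }
        }
    }
}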

From a VR first timer’s perspective, the teleportation technique completely freaks people out at first. I tend to show first timers Valve’s demo “Aperture Robot Repair”, which has a cool teleportation implementation, but it takes a few tries to get used to. Especially amusing is when they mess it up the first time and teleport up against a wall; the shock is something to behold. Other implementations involve various techniques which I won’t go into in this post.

My brother tried a VR experience which placed the player up on a high beam. This totally freaked him out; having a fear of heights didn’t help, but to my amusement, watching him try to balance himself on a flat surface (i.e. the bedroom floor) was rather good.

So, moving on from this, I would highly recommend showing VR first timers the HTC Vive. It really delivers the best experience VR can currently offer, and the room-scale capabilities enhance the experience tenfold.

Here’s the order of content I demo to VR first timers:

theBlu:

A cinematic experience which introduces VR first timers to room-scale VR with no interaction required, limiting the load on the senses and allowing them to adapt to the virtual environment and the power of room-scale VR. Everyone I have shown this to has been blown away, especially when a big whale swims by. Generally people are super convinced at this point.

Aperture Robot Repair:

This adds an extra layer to the room-scale VR experience with the ability to interact with objects in a virtual room; the senses are going a bit more crazy here, and I usually get a lot of nervous laughter and “whoa!!” outbursts. It introduces the teleportation input technique, which freaks people out at first. Generally, feedback has been very positive with this experience.

TiltBrush:

An artistic VR experience, empowering the user to be creative with painting and drawing in VR; the user can move around and paint strokes or objects into the virtual space. Usually at this point the majority of users have adapted to room-scale VR and everything is starting to feel more natural. As far as I know there’s no teleportation technique implemented.

After they are done with these three experiences, I offer a choice of content. This means they can pick something that interests them, or something that might heighten an emotion such as fear, as in horror or a fear of heights. Here is my list, in no particular order:

  • Job Simulator
  • The Rose and I
  • Fantastic Contraption
  • Space Pirate Trainer
  • AudioShield
  • The Brookhaven Experiment
  • CloudLands: VR Minigolf
  • Skeet: VR Target Shooting
  • Selfie Tennis
  • The Lab

I acknowledge that content and people’s interests play a big part in personal VR experiences, and notice I said “personal”: from what I’ve seen, everyone’s VR experience is unique and personal. People react to different experiences differently, and it can affect them emotionally, making them excited, nervous, amused, scared, peaceful etc. It would be great to hear what content you like to show to VR first timers, and the reactions you’ve seen, so please comment below.

So what have I learnt exactly? Let’s summarise:

  • Demo the best technology on the market i.e. HTC Vive with room-scale VR
  • Ask the individual if they have any fears before letting them play
  • Always ensure the player doesn’t step on the headset cable
  • Start with the most basic / limited experience, for example theBlu, which is awesome
  • Begin introducing more complex VR experiences which involve interaction and movement
  • Then go wild if they feel comfortable continuing

Notice I haven’t covered nausea in this post; content on the HTC Vive and the technology itself shouldn’t cause it, though specific people may still experience it. I haven’t had one person experience nausea with the HTC Vive, but I’m sure someone out there has.

Thanks all!

Experimenting with HDR Skyboxes in Unity 5.

Unity 5 has implemented a HUGE update on the rendering / graphics side of the engine, introducing new lighting workflows with realtime GI and a Physically Based Shader (Metallic & Specular workflows supported), among many other things..

I wanted to do an experiment today where I test out HDR skyboxes in Unity 5 to see how drastically the lighting and mood of a scene can change.

HDR:

High Dynamic Range is an imaging technique in photography used to produce a higher dynamic range of brightness. This is achieved by capturing different exposures of your chosen subject matter and combining them into one image.

In Unity we can use these HDR images to help blend 3D models into the environment, which can drastically add to the belief that the 3D model is actually in the environment.

Quick mention – I’m using HDR images from NoMotion HDRs (150 free HDR images); check the EULA before using them commercially.

I’ll be showing how these 3 HDR images help create a completely different feel and look to the scene:

I have daytime, evening and night-time HDRs; they should all create drastically different lighting conditions in Unity.

Time’s up chumps..let’s do this:

Unity 5’s new standard shader has built-in support for image based lighting (IBL), which really helps with the belief that the 3D model is in the environment set by the HDR skybox.

Here’s the model with just Unity 5’s current default procedural skybox:

Default Skybox

With a bit of trickery (not really, just pushing a few buttons in Unity), let’s now see what happens when I add the daytime HDR Skybox:

Daytime Skybox

I haven’t messed with any lights at all; all I have is the default Directional Light in the scene with default settings. It looks rather impressive after just adding a new HDR skybox to the scene.
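If you’d like to script this comparison rather than swapping the skybox by hand, here’s a minimal sketch (assuming you’ve already created skybox materials from the HDR images and assigned them in the inspector) that switches RenderSettings.skybox at runtime:

using UnityEngine;

// Minimal sketch: cycle through skybox materials (e.g. daytime, evening,
// night-time) created from HDR images and assigned in the inspector.
public class SkyboxSwitcher : MonoBehaviour
{
    public Material[] skyboxes;

    int current;

    void Update()
    {
        if (skyboxes.Length > 0 && Input.GetKeyDown(KeyCode.Space))
        {
            current = (current + 1) % skyboxes.Length;
            RenderSettings.skybox = skyboxes[current];
            // Re-capture the environment lighting so the ambient / IBL
            // contribution matches the new skybox.
            DynamicGI.UpdateEnvironment();
        }
    }
}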

Let’s try the evening HDR Skybox now:

Evening Skybox

Awesome, I am really enjoying playing around with this, I love how quickly I can completely change the lighting conditions which in turn drastically changes the mood and atmosphere in the scene. I can’t wait to see how games are going to utilize these new features.

Okay, last one: the night-time HDR skybox. I’m a little skeptical about this one; I’ve no idea how it will look, and I imagine a night-time HDR image isn’t really desirable to use. Anyway, let’s see what it looks like:

Nighttime Skybox

It actually turned out rather well. The image doesn’t show the white spots on the model as much as the editor does; these white spots are produced by the specular smoothness on the Standard shader, i.e. the smoother I make the material, the more white spot artifacts are produced. I’m not sure why this is more noticeable in the night-time scene; I’m sure there’s a setting I’ve missed or something..

Overall, I’m really impressed with this, especially how quickly I can change the mood and lighting of the scene, and with the visual output from Unity 5’s new rendering / graphics update.

Look forward to seeing what you all produce with Unity 5. 🙂

R&D with new Unity 5 Graphics..

My job is based on supporting Unity’s customer base in the EMEA region; to do a good job I need to learn all of the Unity things, features and new services, at least at a high level.

I tend to experiment a lot with Unity’s new features, and today I wanted to share some of my R&D with Unity 5’s new graphics overhaul. This includes the new Physically Based Shader, Realtime Global Illumination and the new Scene Render Settings.

My experiments are usually aimed at producing a small-scale demo that squeezes in as many features as possible; this enables me to demonstrate those features easily to customers while in the field.

Field

PBS (Physically Based Shader):

Unity’s new Physically Based Shader, a.k.a. one shader to rule them all, a.k.a. the standard shader (actual name), allows us to create materials for a wide range of natural surfaces: metals; dielectrics (non-metals), i.e. monolithic materials such as rock, water, plastic, ceramic, wood and glass; cloth; and organic surfaces.

The new PBS also plays nicely with IBL (Image Based Lighting); we can set up a skybox cubemap in the new Scene Render Settings to help really blend our objects into the surrounding environment:

Scene Render Settings

One demo (not developed by me) shows a nice range of different surfaces used by the new standard shader in Unity 5:

We can see there are at least six different surfaces represented here with the use of just one shader: Ceramic, Cloth, Metal, Glass, Rock and Wood. The Scene Render Settings really help blend the Doll model into the surrounding area, helping us believe that the Doll is in the Forest environment.

The new shader includes many different texture slots, allowing you to add really nice detail to models. Under the hood it’s built from multiple shader variations, with versions for mobile and high-end.

Standard Shader PBS
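As a rough illustration of driving the shader from code, here’s a hedged sketch that tweaks the metallic and smoothness values on a Standard shader material; “_Metallic” and “_Glossiness” are the property names the Metallic workflow exposes, though beta APIs may still shift:

using UnityEngine;

// Sketch: adjusting Standard shader properties from script. Assumes the
// renderer on this GameObject already uses a Standard shader material.
public class SurfaceTweaker : MonoBehaviour
{
    [Range(0f, 1f)] public float metallic = 0.5f;
    [Range(0f, 1f)] public float smoothness = 0.8f;

    Material mat;

    void Start()
    {
        // Note: .material instantiates a per-object copy of the material.
        mat = GetComponent<Renderer>().material;
    }

    void Update()
    {
        mat.SetFloat("_Metallic", metallic);
        mat.SetFloat("_Glossiness", smoothness); // the smoothness slider
    }
}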

Our built-in shader code is always available for download, and with Unity 5 this will include the new standard shader as well. That could change, but I doubt it.

Realtime Global Illumination:

Realtime GI is half pre-computed, half realtime lighting, allowing you to dynamically light your scene: you can change light sources, environment lighting and material properties such as diffuse reflectivity and surface emission on the fly.
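To give a flavour of the “dynamically changing material properties” part, here’s a small sketch that pulses a surface’s emission and feeds the change to the GI system. DynamicGI.SetEmissive is the Unity 5 call for this; treat the rest as an assumption-laden example:

using UnityEngine;

// Sketch: pulsing an emissive material and notifying realtime GI.
// Assumes this object is marked static so Enlighten bounces its light.
public class EmissivePulse : MonoBehaviour
{
    public Color emissionColor = Color.cyan;

    Renderer rend;

    void Start()
    {
        rend = GetComponent<Renderer>();
        // Make sure the Standard shader's emission variant is active.
        rend.material.EnableKeyword("_EMISSION");
    }

    void Update()
    {
        // Oscillate intensity between 0 and 1.
        Color c = emissionColor * Mathf.PingPong(Time.time, 1f);
        rend.material.SetColor("_EmissionColor", c);
        // Tell the realtime GI system about the new emissive value.
        DynamicGI.SetEmissive(rend, c);
    }
}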

Geometry needs to be marked lightmap static, but you can relight non-static geometry using Light Probes, which are updated in realtime with the GI generated from the static geometry. In my little demo I’ve combined Realtime GI, PBS, Reflection Probes and Light Probes, with the majority of objects marked static apart from a few props that demonstrate the use of Light Probes for non-static objects:

A couple of shout outs: for most of the props I’ve used the Medieval Assets Pack by Naresh and the Medieval Boat from Mr Necturus, both available on the Asset Store. The wooden cottage centre piece is a free model from some site I can’t remember.

Here’s a Vine I recently sent out. It’s a little outdated compared to the above screenshots, but it demonstrates realtime GI in the editor with play mode on:

There’s more of this to learn as Unity 5 develops through the beta stage. Note: screenshots are from a beta version of Unity 5 and may look different when it’s released.

Also worth sharing is what Alex Lovett is doing with the Unity 5 beta and Realtime GI: http://forum.unity3d.com/threads/unity-realtime-reflections-and-gi-and-realism-exploration.266258/ – Now if only my Realtime GI R&D looked like that 😀

Localization Support with Unity!

LanguageManager

New users of Unity tend to ask about integrated localization support within the editor (built-in tools). Currently Unity does not have this, so users ask for solutions and what’s available to get it set up. I found a free package on the Asset Store titled Language Manager, which is a key based system that makes it easy to integrate multiple language support into your games and apps. Let’s take a deeper look at the package and highlight how easy it is to set up:

Getting Started:

  1. Create an Asset Store account if you haven’t got a UDN account set up already
  2. Search for Language Manager, or follow this link for the browser version: https://www.assetstore.unity3d.com/#/content/1018
  3. Download and import the asset package into a new project (best to test it out before importing directly into your current professional project)

Within your new Unity project you will see a folder named “LanguageManager” in the project window. The folder contains the scripts and resources needed to add localisation support to your game or app, with support already included for 6 different languages. Let’s take a quick look at the sample scene included.

Double click the sample scene to open it, click the main camera in the hierarchy window, and notice the script component named TestScript.

The script contains:

  • A GUI Selection Grid, allowing the user to press a GUI button to switch languages
  • A public string which gives the option to select the default language in the inspector
  • A Switch statement containing support for all 6 languages (English, Spanish, French, Italian, Chinese and Russian)

Hit play and observe that the two GUI sentences at the top left of the game view are rendered in your chosen default language. The GUI buttons below allow the user to switch to a different language; clicking Russian will update the text, including the GUI buttons, to be displayed in Russian characters. This example scene can be adapted into a language menu screen at the beginning of your game / app or in the main menu.

It’s also worth mentioning you can get the system / OS language from the API call Application.systemLanguage. This returns the user’s OS default language and works on mobile devices as well as PC, Mac and other major platforms; some are not supported, so be sure to test it. Example code for this, in C# of course:

using UnityEngine;
using System.Collections;

// Displays the OS language on a legacy GUIText component attached
// to the same GameObject.
public class OSLanguage : MonoBehaviour
{
    void Start()
    {
        // Application.systemLanguage returns a SystemLanguage enum value;
        // the language won't change at runtime, so set the text once.
        guiText.text = Application.systemLanguage.ToString();
    }
}

Back to the LanguageManager package. There is a custom window included in Unity’s Window drop-down menu -> Language Editor; this allows the user to create new keys and create files for different languages (as many as you like, I imagine). The package is initialised with LanguageManager.LoadLanguageFile(defaultLanguage); and you fetch strings by calling LanguageManager.GetText("string key"). That’s where you reference the keys added in the Language Editor window; for example, the string key for the English language is “english”. 🙂 With just a few lines of code you can get basic support for multiple languages within your games / apps and apply them to GUI elements in your scene(s).
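Pulling those calls together, a minimal usage sketch might look like the following; the key name "greeting" is hypothetical and the exact signatures come from the package, so double-check against the imported scripts:

using UnityEngine;

// Minimal sketch of the LanguageManager calls described above.
// The "greeting" key is hypothetical; create your own keys in the
// Language Editor window and reference them here.
public class LocalizedLabel : MonoBehaviour
{
    public string defaultLanguage = "english";

    void Start()
    {
        // Load the language file once, e.g. on a menu screen.
        LanguageManager.LoadLanguageFile(defaultLanguage);
    }

    void OnGUI()
    {
        // Look up a localised string by its key.
        GUILayout.Label(LanguageManager.GetText("greeting"));
    }
}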

Take a look at the package (it’s a free download after all), and also check out the many other packages on the Asset Store here.

Unity 5 Announced!

Unity 5: http://unity3d.com/5

I’m getting straight to the point here; this is what is included in Unity 5:

Physically-based Shading:

There’s a new shader for setting up great looking materials in a range of lighting environments. It’s one shader to rule them all, an uber-shader one might call it; you can use it for a range of different surfaces such as wood, metal, plastics, ceramics, cloth and many others.

Unity5_Teleporter

Realtime Global Illumination:

Built upon Geomerics’ Enlighten technology, Unity has integrated realtime physically-based Global Illumination, cross-platform, and it runs super nicely on mobile / tablet devices. You can animate lights, set up beautiful environment lighting and make use of emissive materials to create stunning effects and visuals. What’s really nice, as an added bonus: you no longer need to rebake lightmaps, which is especially painful when bake times are long for larger scenes; Global Illumination updates immediately upon making any changes, dramatically speeding up iteration times.

WebGL:

Plugin-less browser technology is approaching fast, and Unity will offer the option to deploy to WebGL without the need for a plugin download to play back content. With a one-click deploy system, build times are super fast and similar to our WebPlayer plugin build system.

Audio Mixer:

New audio mixing technology enters Unity, with simple workflows for setting up different sounds within your 2D / 3D games. Set up realtime mixing graphs, edit and tweak in play mode, create and blend between snapshots, insert effects into the mixers, implement ducking of sounds and much more..

Unity Cloud:

This service offers the ability to integrate cross-promotion campaigns for acquiring players, and to help with retaining them.

64-bit Editor:

The 64-bit editor brings massive improvements to Unity for handling demanding tasks that the 32-bit version might just crash on with out-of-memory errors. The runtime was ported a while back now, but getting the editor ported with all of its dependencies took time.

PhysX 3.3:

The much requested update to PhysX has arrived. NVIDIA completely rewrote the system, bringing excellent performance boosts, which is great for mobile / tablet devices. A new wheel collider is available amongst other things, and more PhysX 3.3 features will be exposed later in the 5.x cycle.

There are many more features in Unity 5 than would fit on my blog, but here is a smaller yet equally juicy feature set:

  • AI: NavMesh supports LoadLevelAdditive.
  • NavMeshObstacle supports two basic shapes – cylinder and box for both carving and avoidance.
  • Editor: The editor is now a 64-bit application.
  • Graphics: Improved ambient lighting.
  • Cubemaps support texture compression
  • Improved LODGroup. A “fade mode” can be set on each level, and a value for how much the current LOD should be blended/faded into the next LOD is passed to the shader program in unity_Scale.z.
  • Non-uniformly scaled meshes no longer incur any memory cost or performance hit.
  • PluginInspector: new plugin system.
  • Scripting: Introduced option to auto-update obsolete Unity API usage in scripts / assemblies.
  • Version Control: Scene and Prefab Merging.
  • Asset Store: The asset store window is now many times faster, more responsive, and looks better.
  • Model importing: Updated FBX SDK to 2015.1
  • Windows Store Apps: You can now use joysticks in addition to Xbox 360 controllers

For a more visual look at Unity 5’s new feature set, take a look at the official Unity Feature Preview video:

With such an exciting announcement I can’t wait for all you guys to get your hands on this awesome toolset!

Thanks!

Unity has 2D tools..

Yes, the much talked about, highly requested 2D feature set is finally here and available to you all for free. In preparation for the Unity 4.3 release, I developed a small 2D physics puzzle game, which enabled me to get up to speed with the toolset and gave me something to use for demo / presentation purposes on my travels.

This post will go through some of the new 2D tools and workflows and shed a little light on how to use the different features available.

2D Defaults

The first new feature is the 2D defaults option in the project wizard. This sets up the Unity engine to use 2D defaults for things such as the texture importer: textures will now be imported as sprites automatically, with no need to change the texture type in the asset importer.

Sprite_importer

There are other subtle touches when using 2D defaults, such as the main camera being set up as orthographic rather than perspective.
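If you ever want to reproduce that camera setup from script, say when converting an existing project, here’s a quick sketch of what the 2D defaults give you:

using UnityEngine;

// Sketch: what 2D defaults do to the main camera, reproduced in code.
public class Make2DCamera : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.orthographic = true;   // no perspective, as with 2D defaults
        cam.orthographicSize = 5f; // half the vertical view height in world units
    }
}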

A new addition to the scene window is a little toggle button which switches the scene view between 2D and 3D. Using 2D defaults means the scene view will have 2D toggled by default, navigating the scene in the x and y axes only, without you needing to click the button on launch. Also note, creating a new project with 2D defaults enabled doesn’t mean you’re now bound to 2D; all 3D and 2D features are still available.

2D_scene

In the 2D viewport in the scene view, with the move tool active and a sprite selected, we now have a new gizmo to play with. This makes it a lot easier to perform actions such as move, uniform / linear scaling and rotation, and it highlights the pivot placement. We do not need to switch between different tools; it’s all there in one place.

Sprite_gizmo

Box2D Physics Integration

Unity 4.3 integrates the Box2D physics engine, a free, open source 2-dimensional physics engine and a leading industry standard, giving us a list of new 2D physics components:

2D_physics_components

Three of the four colliders are self explanatory, but the more interesting one to look at closely is the Polygon Collider.

Add the component to your sprite asset in the same way you would add any component and you will see a green highlighted collider outline; the green lines indicate it’s a collider, and it should roughly match the shape of your sprite.

If you add the Polygon Collider to an empty GameObject, a pentagon shape will be generated. If you expand the “Collider Info” section in the component you will see a value of 5 vertices; a pentagon is made of 5 points, therefore 5 vertices.

To add more vertices, hold down shift + left mouse button anywhere on the green collider line and then position your vertices. To delete vertices, use ctrl + left mouse click in the same fashion; you will see the green collider line turn red to indicate delete is available:

Polygon_collider_info
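You can also read that collider info back from script; here’s a small sketch that logs the vertex count of the collider’s first path:

using UnityEngine;

// Sketch: reading the Polygon Collider's vertex data from script,
// mirroring the "Collider Info" readout in the inspector.
public class ColliderInfoLogger : MonoBehaviour
{
    void Start()
    {
        PolygonCollider2D poly = GetComponent<PolygonCollider2D>();
        // points holds the vertices of the first path; a collider freshly
        // added to an empty GameObject reports 5 (the pentagon).
        Debug.Log("Vertices: " + poly.points.Length);
    }
}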

The Sprite Editor

There’s a new window to edit sprites in, called the Sprite Editor, which can be opened in two ways: Window -> Sprite Editor, or click on the sprite asset in the project window, change the sprite mode to “Multiple” in the inspector, and click the Sprite Editor button.

Sprite_editor

Note: You need to change the sprite mode to multiple to be able to slice it up in the Sprite Editor.

Let’s take a look at the Slice menu, which gives you options to slice up different elements of your sprite image automatically or manually.

A typical workflow is to slice up the image manually. You can do that by clicking and dragging on the image; you should see a blue rectangular gizmo appear with handles in the corners, allowing you to resize your slice area. Having positioned and resized your slice area, you can add another by repeating the same action, and you can add as many as you like. You will also notice a new Sprite window appear, giving you extra tools to play with: you can rename the area for that sprite slice and edit the size of the rectangle by coordinates. The Trim button will tightly pack the slice rectangle based on transparency, so it fits nicely along the edges of the sprite image.

In many cases you can probably use automatic slicing; Unity will do the work for you and save some time. Using the Automatic slicing option, the Sprite Editor will guess the boundaries of each sprite, again by transparency.

Grid is another slicing option, very useful for rectangular sprites; the Pixel Size fields let you define the height and width of the tiles in pixels.

After automatic slicing you can still edit the slices manually, and you can also use the Trim button to tighten them up.

Lots of other cool stuff is included, such as using the Animation window to easily animate sprites and using Mecanim for 2D blend trees, but I won’t cover that right now.

To finish up here’s some useful info:

Box2D Performance tips:

  • Try to avoid doing a lot of work in OnCollisionStay2D callbacks; they fire every frame while colliders touch, so this gets expensive (see the sketch after this list).
  • Profiler is your friend, scroll down in the Profiler to Profile Physics2D and profile early in your development cycle.
  • Using the Polygon Collider can be expensive if lots of vertices are needed; the Polygon Collider decomposes the sprite into lots of shapes, which has a huge overhead.
  • Each shape can produce up to two contact points, so a sprite with 50-80 shapes could produce double that number of contacts.
  • Keep an eye on the number of contacts from dynamic bodies; if they don’t collide you can have hundreds, maybe thousands, moving around, but when they come into proximity performance starts to crumble
  • Circle Collider is your friend.
  • Consider all of the above especially when targeting low-end mobile hardware
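To make the first tip concrete, here’s a minimal sketch contrasting the per-contact OnCollisionEnter2D callback with the per-frame OnCollisionStay2D one:

using UnityEngine;

// Sketch: OnCollisionEnter2D fires once per new contact, while
// OnCollisionStay2D fires every frame the colliders remain touching,
// so keep the stay callback trivial (or avoid it entirely).
public class ContactCounter : MonoBehaviour
{
    int stayFrames;

    void OnCollisionEnter2D(Collision2D collision)
    {
        // One-off reaction: cheap, runs once per new contact.
        Debug.Log("Touched " + collision.gameObject.name);
    }

    void OnCollisionStay2D(Collision2D collision)
    {
        // Runs every frame while touching; avoid allocations or heavy work.
        stayFrames++;
    }
}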

That’s all folks!

Modelling a Sports Stadium

Rugby Skills Challenge 2013

I’ve had a great month and a half or so developing a 3D model and beta testing a recently released iOS title by game developer Russ Morris.

We got talking about his rugby game at the Unity UK Christmas party, and Russ mentioned needing a stadium model for his game. Naturally I said “hey, I’ll give it a go”, and after a lengthy conversation (which I only actually remembered the day after) I nagged him for a bit to send over the details. Thankfully Russ obliged and the creative development began.

So I wanted to blog about the development of the stadium model and some of the techniques I used, trying to keep it as low poly as possible for mobile development.

Starting point:

My starting point was the centre piece, the field of play, so I needed to research the typical dimensions of a rugby pitch. I modelled the basic rectangular shape, then expanded out from there; having the focal point in place gave me a good start in terms of thinking about and planning the architecture of the model. I used reference images taken from Cardiff’s Millennium Stadium and the Aviva Stadium in Ireland; the important thing was to ensure the shape and architecture of the model was directed at the main focal point (the field).

Pitch

The design has curved corners in the tiers, so the stadium is a full oval shape like most modern day stadiums. From here I started building up the second tier and adding some basic roof structure, just to get an idea and feel for the model.

Adding details:

I quickly moved on to adding details to the tiers: steps and entrance points for aisles. The challenging area was creating the corner tiers; I used the technique of extruding and rotating each time by hand, but I could have used the bridge tool, added the correct number of segments, and then repositioned each set of polys. Either technique would have worked fine.

Adding Detail

I finished the lower tier, fully complete with steps and aisle details. The good thing is, for the top tier, I could duplicate the lower tier, adjust the positions of some polys and scale up where needed; no need to model the top tier from scratch. Here are both tiers fully completed:

Fully completed stands.

I needed to close the gaps up and model some outer geometry, so I modelled a bunch of cubes and joined the verts together at each adjacent point. It was just about getting the right positions and joining the correct verts together.

Creating the roof:

The roof structure design is based on the Millennium Stadium in Cardiff: the corners needed gaps, and the support structures needed to be added as well.

The Millennium Stadium
Adding roof structure
Support structure

Combining all this together gave me something nearly complete; the rest of the tweaks, such as texturing and advertising boards, were added by Russ in the game project. Here’s the final model in Modo:

Rugby Stadium

Download and install the game now; you can find it by searching for Rugby Skills Challenge 2013 on the App Store, or by clicking the image below, which redirects you to the App Store:

appstore

Thanks!

Unity to Modo and Back Again!

This is a somewhat long overdue post sharing my initial discovery of Modo, so here goes…

I finally took the plunge into the Modo universe. Having been aware of its existence for some time, and as an explorer of new technologies and software, I couldn’t resist; my curiosity got the better of me, damn it!

I’ve been working on this project for some time now. I wanted to design and model the interior of a flat, using the flat I live in now for reference; my aim was to model everything in Modo and then import the scene into Unity, using Unity’s rendering and post processing effects to bring the scene to life.

Everything has a beginning..

I began learning Modo by following various tutorials from AppleSoldier and The3dNinja, doing proper n00b tutorials just to get a feel for the package and find my way around the UI. After becoming familiar with the workflow and modelling basic shapes, I felt confident enough to jump ahead and start modelling some of the assets needed for my scene. Before doing that, however, I did some technical drawing, which included measurements and positioning of objects, creating a basic layout for my scene. Here is the basic layout modelled in Modo:

Moving on from this I began modelling more complex assets such as furniture and other props and objects needed:

Each object and prop was UV’d and saved as an .lxo file. I really like that Unity uses native files: saving an edit to a model in Modo automatically reimports that model in Unity to reflect the change, which is great because it happens almost instantly, and that’s amazing for iteration.

I imported all the assets into the initial basic layout scene in Unity, positioning each object precisely which gave me something like this:

The scene is actually lightmapped in Unity using Directional lightmaps with a Max atlas size of 4096 x 4096.

I found the workflow between the two packages relatively easy: as long as you save your .lxo files into the assets folder inside your Unity project, all is good.

Moving on to texturing, I decided to use the library of Substance materials available on the Unity Asset Store. I hadn’t used Substance materials in what I’d consider a proper project before; I had only messed around with them, but they add a great deal of realism and depth to your models and I wanted to explore that more.

An example:

After adding all the textures, with some objects using Substance materials and others using different textures, I had produced a scene which I am happy to show:

Thanks!