21 February 2017

A generic toggle component for HoloLens apps

Intro

The following scenario is one I have seen a lot of times: the user taps a UI element, and then that element and/or a couple of other elements need to fade out, disappear, or otherwise change state. I suppose every developer has felt the itch that occurs when you build essentially the same thing a second time and feel a third and a fourth time coming up. Time to spin up a new reusable component. Meet Toggler and its friend, Togglable.

The Toggler

This is a simple script that you can attach to any object that should function as a toggle – a 'button', if you like. It's so simple and concise that I'll just write the whole thing in one go:

using System;
using System.Collections.Generic;
using HoloToolkit.Unity.InputModule;
using UnityEngine;

namespace HoloToolkitExtensions
{
    public class Toggler : MonoBehaviour, IInputClickHandler
    {
        private AudioSource _selectSound;

        public List<Togglable> Toggles = new List<Togglable>();

        public virtual void Start()
        {
            _selectSound = GetComponent<AudioSource>();
        }

        public virtual void OnInputClicked(InputClickedEventData eventData)
        {
            foreach (var toggle in Toggles)
            {
                toggle.Toggle();
            }
            if (_selectSound != null)
            {
                _selectSound.Play();
            }
        }
    }
}

This behaviour holds a list of Togglable objects. When it's clicked, it calls the Toggle method on every Togglable in the list, and optionally plays a feedback sound to confirm the toggle has been clicked.
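The list is intended to be filled from the Unity editor (more on that below), but nothing stops you from wiring it up from code. A minimal sketch, with hypothetical names of my own (TogglerSetup, TargetOne, TargetTwo), could look like this:

using UnityEngine;

namespace HoloToolkitExtensions
{
    // Hypothetical example: wiring a Toggler from code instead of the editor.
    // TargetOne and TargetTwo are assumed to carry a Togglable-derived component.
    public class TogglerSetup : MonoBehaviour
    {
        public GameObject TargetOne;
        public GameObject TargetTwo;

        void Start()
        {
            var toggler = gameObject.AddComponent<Toggler>();
            toggler.Toggles.Add(TargetOne.GetComponent<Togglable>());
            toggler.Toggles.Add(TargetTwo.GetComponent<Togglable>());
        }
    }
}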

The Togglable

This is almost embarrassingly simple.

using UnityEngine;

namespace HoloToolkitExtensions
{
    public abstract class Togglable : MonoBehaviour
    {
        public abstract void Toggle();
    }
}

and in itself completely uninteresting. What is interesting, though, is that you can use this base class to implement behaviours that actually do something useful (which is the point of base classes, usually. D'oh). I will give a few examples.

A toggleable that ‘just disappears’

Also not very complicated, although there's a bit more to it than you would think:

namespace HoloToolkitExtensions
{
    public class ActiveTogglable : Togglable
    {
        public bool IsActive = true;
        public virtual void Start()
        {
            gameObject.SetActive(IsActive);
        }

        public override void Toggle()
        {
            IsActive = !IsActive;
            gameObject.SetActive(IsActive);
        }

        public virtual void Update()
        {
            // This code makes sure the logic still works if someone
            // sets the IsActive field directly
            if (IsActive != gameObject.activeSelf)
            {
                gameObject.SetActive(IsActive);
            }
        }
    }
}

So when Toggle is called, SetActive is called with either true or false, and the game object it's attached to will flash in and out of existence.

A toggleable that fades in or out

This is a bit more work, but with the use of LeanTween animating opacity is pretty easy:

using UnityEngine;

namespace HoloToolkitExtensions
{
    public class FadeTogglable : Togglable
    {
        public bool IsActive = true;
        public float RunningTime = 1.5f;
        private bool _isBusy = false;
        private Material _gameObjectMaterial;

        public virtual void Start()
        {
            _gameObjectMaterial = gameObject.GetComponent<Renderer>().material;
            // Animate to the initial state in 0 seconds, i.e. instantly
            Animate(0.0f);
        }

        public override void Toggle()
        {
            IsActive = !IsActive;
            Animate(RunningTime);
        }

        public virtual void Update()
        {

            // This code makes sure the logic still works if someone
            // sets the IsActive field directly
            if (_isBusy)
            {
                return;
            }
            if (IsActive != (_gameObjectMaterial.color.a == 1.0f))
            {
                Animate(RunningTime);
            }
        }

        private void Animate(float timeSpan)
        {
            _isBusy = true;
            LeanTween.alpha(gameObject, 
                IsActive ? 1f : 0f, timeSpan).setOnComplete(() => _isBusy = false);
        }
    }
}

Initially it animates to the initial state in 0 seconds (i.e. instantly), and when Toggle is called it animates over the normal running time from totally opaque to transparent – or the other way around.

There is a little caveat here – the object that needs to fade must use a material that actually supports transparency. So, for instance:

image
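The screenshot shows the material inspector, but you can also switch a Standard shader material to 'Fade' mode from code. This sketch assumes the Standard shader and mirrors what the inspector dropdown does behind the scenes:

using UnityEngine;
using UnityEngine.Rendering;

namespace HoloToolkitExtensions
{
    public static class MaterialHelper
    {
        // Switch a Standard shader material to 'Fade' rendering mode,
        // so its alpha channel can actually be animated
        public static void SetToFadeMode(Material material)
        {
            material.SetFloat("_Mode", 2); // 2 = Fade in the Standard shader
            material.SetInt("_SrcBlend", (int)BlendMode.SrcAlpha);
            material.SetInt("_DstBlend", (int)BlendMode.OneMinusSrcAlpha);
            material.SetInt("_ZWrite", 0);
            material.DisableKeyword("_ALPHATEST_ON");
            material.EnableKeyword("_ALPHABLEND_ON");
            material.DisableKeyword("_ALPHAPREMULTIPLY_ON");
            material.renderQueue = 3000; // transparent render queue
        }
    }
}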

So what is the point of all this?

I have created a little sample application to demonstrate the point. There is one ‘button’ – a rotating blue sphere with red ellipses on it, and four elements that need to be toggled when the button is clicked – two cubes that simply need to wink out, and two capsules that need to fade in and out:

image

You drag the ActiveTogglable on both cubes, and the FadeTogglable on both capsules. In fact, I did it a little differently: I made prefabs of the cube and the capsule and dragged two instances of each into the scene. Force of habit. But in the end it does not matter. What does matter is that, once you have dragged a Toggler script on top of the sphere, you can now simply connect the Toggler and the Togglables in the Unity editor, like this:

image

Which makes it pretty darn powerful and reusable I'd say – and extendable, since nothing keeps you from implementing your own Togglables.
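For example, here is a hypothetical ColorTogglable (not part of the sample) that flips the attached object's material color on every toggle:

using UnityEngine;

namespace HoloToolkitExtensions
{
    public class ColorTogglable : Togglable
    {
        public Color OnColor = Color.green;
        public Color OffColor = Color.red;

        private bool _isOn = true;
        private Material _material;

        public virtual void Start()
        {
            _material = gameObject.GetComponent<Renderer>().material;
            _material.color = _isOn ? OnColor : OffColor;
        }

        // Swap between the two colors on every toggle
        public override void Toggle()
        {
            _isOn = !_isOn;
            _material.color = _isOn ? OnColor : OffColor;
        }
    }
}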

The result in action looks like this:

Why not an interface instead of a superclass?

Yeah, that's what I thought too. But just try it – components that can be dragged on top of each other in the editor need to be just that: components. Unity's serialization cannot handle interface-typed fields, so a list of interface references would not show up in the editor at all. Everything you drag needs to be a component at minimum, and you want the concrete classes to be behaviours – so you have to use a base class that is a behaviour too. Welcome to the wondrous world of Unity, where nothing is what it seems – or what you think it is supposed to be ;)

Concluding remarks and some thoughts about 3D interfaces

Remember how Apple designed skeuomorphic user interfaces that, for instance, required you to take a book out of a bookshelf? For young people, who may never have held many physical books, that's about as absurd as the floppy disk icon for 'save' – which is still widely used. But it worked in the real world, so we took it to the digital 2D world, even when it no longer made sense. Microsoft took the lead with what was then called 'Metro' – 'digital native' flat design. Now buttons no longer mimic 3D radio buttons and heaven knows what.

We are now in the 2007 of 3D UI design. No-one has any idea how to implement true 3D 'user interfaces', and there is no standard at all. So we tend to fall back on what worked – 2D design elements, or 3D design elements that resemble physical objects, like 3D 'light switch buttons' attached to some 'wall'. Guilty as charged – my HoloLens app for Schiphol has a 2D 'help screen' complete with button.

With my little rotating globe I am trying to find a way towards '3D digital native design', although I am not a designer at all. But I am convinced the future is somewhere in that direction. We need a 'digital design language' for Mixed Reality. Maybe it's rotating globes. Maybe it's something else. But I am sure as hell about what it's not – and that is floating 2D or 3D buttons, or 'devices' resembling physical machinery.

Code, as per my trademark, can be found here.

11 February 2017

A behaviour for dynamically loading and applying image textures in HoloLens apps

Intro

After two nearly code-less posts it's time for something more code-heavy, although it's still out of my ordinary mode of operation: it's fairly short and there's not much code. So rest assured, this is not the code equivalent of "War and Peace" as usual ;)

For both a customer app and one of my own projects I needed to be able to download images from an external source to use as a texture on a Plane (a flat object with essentially only width and height). Now that's not very hard – the Unity scripting reference contains a clear example of how to do that. But for my own project I needed to be able to change the image later (that is, reload an image on a Plane that had already loaded a texture before), and I also had to make sure the image was not distorted by width/height ratio differences between the Plane and the image. That required a radically different approach.

Enough talk: code!

The behaviour itself is rather small and simple, even if I say so myself. It starts as follows:

using UnityEngine;

public class DynamicTextureDownloader : MonoBehaviour
{
    public string ImageUrl;
    public bool ResizePlane;

    private WWW _imageLoader = null;
    private string _previousImageUrl = null;
    private bool _appliedToTexture = false;

    private Vector3 _originalScale;

    void Start()
    {
        _originalScale = transform.localScale;
    }

    void Update()
    {
        CheckLoadImage();
    }
}

ImageUrl is a property you can set either from code or from the editor; it points to the location of the desired image on the web. ResizePlane (default false) determines whether or not you want the Plane to resize to fit the width/height ratio of the image. You may not always want that, as the center of the Plane stays in place – for instance, when the Plane's top is aligned with something else. If the resizing makes the Plane's height decrease, that may ruin your experience.

The first three private fields are status variables; the last one holds the original scale of the Plane before we started messing with it. We need to retain that, as we can't trust the scale once we start changing it – I have seen the Plane become smaller and smaller when I alternated between portrait and landscape pictures.

The crux is the CheckLoadImage method:

private void CheckLoadImage()
{
    // No image requested
    if (string.IsNullOrEmpty(ImageUrl))
    {
        return;
    }

    // New image set - reset status vars and start loading new image
    if (_previousImageUrl != ImageUrl)
    {
        _previousImageUrl = ImageUrl;
        _appliedToTexture = false;
        _imageLoader = new WWW(ImageUrl);
    }

    if (_imageLoader.isDone && !_appliedToTexture)
    {
        // Apparently an image was loading and is now done. Get the texture and apply
        _appliedToTexture = true;
        Destroy(GetComponent<Renderer>().material.mainTexture);
        GetComponent<Renderer>().material.mainTexture = _imageLoader.texture;
        Destroy(_imageLoader.texture);

        if (ResizePlane)
        {
            DoResizePlane();
        };
    }
}

This might seem mightily odd if you are a .NET developer, but that's the nature of Unity. Keep in mind this method is called from Update, so it runs 60 times per second. The flow is simple:

  • If ImageUrl is null or empty, just forget it
  • If an ImageUrl is set and it is a new one, reset the two status variables and create a new WWW object. You can see this as a kind of WebClient. Key to know: it's asynchronous, and it has an isDone property that only becomes true once the download has finished. So while it's downloading, the next part is skipped
  • If, however, the WWW object is done, we need to apply its texture – but only if we did not do so before. So then we actually apply it.

So after the image is applied, the first if clause is false, because we have an ImageUrl. The second is false, because the last loaded URL is equal to the current one. And finally, the last if clause is false because the texture has been applied. So although this method is called 60 times a second, it essentially does nothing – until you change the ImageUrl.

An important note – you see that I first destroy the existing Renderer's texture, then load the WWW's texture into the Renderer, and then destroy the WWW's texture again. If you use a lot of these objects in one project and have them change image regularly, Unity's garbage collection process cannot keep up, and on a real device (i.e. a HoloLens) you will soon run out of memory. The nasty thing is that this won't happen anytime soon in the editor or an emulator. This is why you always need to test on a real device. And this is also why I had to update this post later ;)

Resizing in correct width/height ratio

Finally the resizing – that's not very hard, it turns out, as long as you keep in mind that the Plane's 'natural posture' is 'flat on the ground'. So what you tend to think of as X is indeed X, but what you tend to think of as Y is in fact Z in the 3D world.

private void DoResizePlane()
{
    // Keep the longest edge at the same length
    if (_imageLoader.texture.width < _imageLoader.texture.height)
    {
        transform.localScale = new Vector3(
            _originalScale.z * _imageLoader.texture.width / _imageLoader.texture.height,
            _originalScale.y, _originalScale.z);
    }
    else
    {
        transform.localScale = new Vector3(
            _originalScale.x, _originalScale.y,
            _originalScale.x * _imageLoader.texture.height / _imageLoader.texture.width);
    }
}

It also turns out a loaded texture handily comes with its own size attributes, which makes it pretty easy to do the resize. For example, a 1024x768 landscape texture on a Plane with an original scale of (1, 1, 1) results in a scale of (1, 1, 0.75).

Sample app

I made a really trivial HoloLens app that shows two (initially empty) Planes floating in the air. I have given them different width/height ratios on purpose (in fact they mirror each other):

image

I have dragged the behaviour on both of them. One will show my blog's logo (a landscape picture) and the other one comes from an Azure blob container and shows a portrait-oriented picture of… well, see for yourself. If you deploy this app – or just hit the play button in Unity – they will initially show this:

image

If you air tap on one of the pictures you get this:

image

In the second picture the images are a lot larger, as they fit 'better' into the Plane. If you tap a picture again, they will swap back. The only thing that actually changes is the value of ImageUrl.
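The post does not show the tap handler itself, but a minimal sketch – using the same IInputClickHandler approach as the Toggler from the previous post, with made-up names (ImageSwapper, FirstUrl, SecondUrl) – could look like this:

using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class ImageSwapper : MonoBehaviour, IInputClickHandler
{
    public string FirstUrl;
    public string SecondUrl;

    private DynamicTextureDownloader _downloader;

    void Start()
    {
        _downloader = GetComponent<DynamicTextureDownloader>();
        _downloader.ImageUrl = FirstUrl;
    }

    // Swap between the two URLs on every air tap; CheckLoadImage picks
    // up the change on the next Update
    public void OnInputClicked(InputClickedEventData eventData)
    {
        _downloader.ImageUrl =
            _downloader.ImageUrl == FirstUrl ? SecondUrl : FirstUrl;
    }
}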

Bonus brownie points and eternal fame, by the way, for the first one who correctly tells me who the person in the picture is, and on what occasion this picture was taken :D.

Some concluding remarks

If you value your sanity, don't mess with the rotation of the Planes themselves. Just pack them into an empty game object and rotate that, so the local coordinate system is still 'flat' as far as the Planes are concerned. I have seen all kinds of weird effects when messing with Plane orientation and location. I am not sure why this is – probably things I don't quite understand yet – but you have been warned.
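As a sketch of that advice (PlaneContainerSetup is a made-up name): parent the Plane to an empty container and rotate only the container, never the Plane:

using UnityEngine;

public class PlaneContainerSetup : MonoBehaviour
{
    public GameObject Plane; // the Plane carrying the DynamicTextureDownloader

    void Start()
    {
        // Wrap the Plane in an empty game object...
        var container = new GameObject("PlaneContainer");
        container.transform.position = Plane.transform.position;
        Plane.transform.SetParent(container.transform, true);

        // ...and rotate the container, so the Plane's local coordinate
        // system stays 'flat'
        container.transform.rotation = Quaternion.Euler(0, 45, 0);
    }
}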

This is (or will be) part of something bigger, but I wanted to share the lessons learned separately, to prevent them from getting lost in some bigger picture. In the meantime, you can view the demo project with source here.

03 February 2017

Using a HoloLens scanned room inside your HoloLens app

HoloLens can interact with reality – that's why it's a mixed reality device, after all. The problem is that the reality you need is not always available. For example, the app may need to run in a room or facility at a (client) location you only have limited access to, and you have to locate objects at places relative to features of the room. Now you can of course use the simulation in the device portal and capture the room.

image

You can save the room into an XEF file and upload that to (another) HoloLens. That works fine at runtime, but in Unity it doesn't help you much with getting a feeling for the space, and moreover, it messes with your HoloLens' spatial mapping. I don't like to use it on a real live HoloLens.

There is another option though, in the 3D view tab:

image

If you click "Update", you will see a rendering of the space the HoloLens has recognized, belonging to the 'space' it finds itself in – basically, the whole mesh. In this case, the ground and first floor of my house (I never felt like taking the HoloLens to the 2nd floor). If you click "Save", it will offer to save a SpatialMapping.obj file. That's the simple Wavefront Object format, and this is something you actually can use in Unity.

Only it looks rather crappy, even if you know what you are looking at. This is the side of my house, with the living room at the bottom left (the rectangular thing is the large cupboard), the master bedroom* with the slanted roof on top of that, and if you look carefully, you can see the stairs coming up from the hallway a little right of center, at the bottom of the house.

image

What is also pretty confusing is the fact that meshes have only one side. This has the peculiar effect that in a lot of places you can look into the house from outside, but not out of the house from within. Anyway – this mesh is way too complex (the file is over 40 MB) and messy.

image

Fortunately, there's MeshLab. And it's free too. Thank heavens, because after you have bought a HoloLens you are probably quite out of money ;)

MeshLab has quite a few tools to make your mesh a bit smoother. Usually, when you look at a piece of the mesh, like for instance the master bedroom, it looks kind of spiky – see left. But it looks a lot better after choosing Filters/Remeshing, Simplification and Reconstruction/Simplification: Quadric Edge Collapse Decimation:

image

image

My house starts to look a lot less like the lair of the Shrike – it's more like an undiscovered Antoni Gaudí building now. Hunt down the material used (in the materials subfolder), set it to transparent, and play with color and transparency. I thought this somewhat transparent brown worked pretty well. Although there's still a lot of 'noise', it now definitely looks like my house – good enough for me to know where things are, or ought to be.

Using this hologram of a space, you can position Kinect-scanned objects or 3D models relative to each other based upon their true relative positions, without actually being in the room. Then, when you go back to the real room, all you have to do is make sure the actual room coincides with the scanned room model – add world anchors to the models inside the room, and then get rid of the room hologram. Thus, you can use this virtual room as a kind of 'staging area', which I successfully did for a client location to which physical access is very limited indeed.
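A sketch of that last step, with names of my own choosing (RoomStaging; WorldAnchor lived in UnityEngine.VR.WSA in the Unity builds of that era):

using UnityEngine;
using UnityEngine.VR.WSA;

public class RoomStaging : MonoBehaviour
{
    public GameObject RoomModel;  // the scanned room hologram
    public GameObject[] Models;   // the objects positioned inside it

    // Call this once the real room and the scanned model coincide
    public void LockModelsAndHideRoom()
    {
        foreach (var model in Models)
        {
            model.AddComponent<WorldAnchor>(); // pin to the real world
        }
        RoomModel.SetActive(false); // get rid of the room hologram
    }
}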

You might notice a few odd things – there are two holes in the floor of the living room. That is where the black leather couch and the black swivel chair are; as I've noticed before, black doesn't work well with HoloLens spatial mapping. I also find the rectangular area that seems to float about a meter from the left side of the house fascinating. That's actually a large mirror that hangs on the bedroom wall, but HoloLens spatial mapping apparently sees it as a recessed area. Very interesting. So not only does this give you a view of my house, it also shows a bit of the HoloLens' quirks.

The project shown above, with both models (the full and the simplified one) in it, can be found here.

* I love using the phrase "master bedroom" in relation to our house, as it conjures up images of a very large room like those found in a typical USA suburban family house. I can assure you that neither our house nor our bedroom does justice to that image. This is a Dutch house. But it is made out of concrete, unlike most houses in the USA, and will probably last way beyond my lifespan.