December

More Util Demos

For the past two months, we've been exploring the possibilities and pushing the boundaries of the interaction model for our augmented-reality-aided music manipulation app.


Sensor management has yet to be integrated into the system. Currently we are using the following UI to debug the sensor-audio manipulation process. Users will be able to pick the types of manipulation they want, as well as whether the manipulation comes from the users themselves (manual) or from the environment (sensors).
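To make that routing a bit more concrete, here is a rough Unity C# sketch of how a manipulation could switch between a manual slider and a sensor reading. The enum names, the reverb placeholder, and the choice of acceleration magnitude as the "environment" value are illustrative guesses, not our actual components.

```csharp
using UnityEngine;

public enum ManipulationType { Pitch, Volume, Reverb }
public enum ControlSource { Manual, Sensor }

public class ManipulationRouter : MonoBehaviour
{
    public ManipulationType type = ManipulationType.Pitch;
    public ControlSource source = ControlSource.Manual;

    [Range(0f, 1f)] public float manualValue = 0.5f;   // driven by a UI slider
    public AudioSource target;

    void Update()
    {
        // Pick the driving value: either the slider or a (placeholder) sensor reading.
        float value = source == ControlSource.Manual
            ? manualValue
            : Mathf.Clamp01(Input.acceleration.magnitude);

        switch (type)
        {
            case ManipulationType.Pitch:
                target.pitch = Mathf.Lerp(0.5f, 2f, value);
                break;
            case ManipulationType.Volume:
                target.volume = value;
                break;
            case ManipulationType.Reverb:
                // a fuller version would drive an AudioReverbFilter here
                break;
        }
    }
}
```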

Another important aspect of the concept is the "dérive". We've been spending some time figuring out the instruction model for constructing a walk, or a journey, to go along with listening to and interacting with the music.


The basic idea is to pan the music to different channels to signal the user to take a turn at some point during their walk. This way, the user doesn't need to focus on the screen to get on with the journey, and the senses are less cluttered. For us, this is a separate model from the audio manipulation system: it is more modular, and it also lets us put less on the screen.
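As a quick illustration, panning in Unity can be done with AudioSource.panStereo (-1 pans fully left, +1 fully right). The snippet below is only a sketch of what a turn cue could look like; the hold time and the glide back to centre are arbitrary choices.

```csharp
using System.Collections;
using UnityEngine;

public class TurnSignaller : MonoBehaviour
{
    public AudioSource music;

    // Pan the music fully to one ear to cue a turn, then glide back to centre.
    public IEnumerator SignalTurn(bool turnLeft, float holdSeconds = 3f)
    {
        music.panStereo = turnLeft ? -1f : 1f;   // -1 = left channel, +1 = right channel
        yield return new WaitForSeconds(holdSeconds);

        while (Mathf.Abs(music.panStereo) > 0.01f)
        {
            music.panStereo = Mathf.MoveTowards(music.panStereo, 0f, Time.deltaTime);
            yield return null;
        }
        music.panStereo = 0f;
    }
}
```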


The logic behind the model is that we wanted to ballpark the distance of a segment of the walk, and if it exceeds some threshold, prompt the user to turn. But due to the constraints of COVID-19, it's been quite difficult to test this part in real physical settings. As an equivalent, we are now measuring the time of the segment (e.g. walk 20 seconds, then turn left). When the "countdown" elapses, the device starts detecting an abrupt turn using its built-in accelerometer, which indicates that the user has completed the last prompted turn; it then generates a new turn and signals it with a channel pan.
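Here is a rough sketch of that time-based loop as a Unity script. The 20-second segment, the acceleration threshold, and the random choice of turn direction are placeholder values for illustration, not the numbers we actually settle on.

```csharp
using UnityEngine;

public class DeriveGuide : MonoBehaviour
{
    public AudioSource music;
    public float segmentSeconds = 20f;   // "walk 20 seconds, then turn"
    public float turnThreshold = 0.6f;   // per-frame acceleration change (in g) treated as an abrupt turn

    float timer;
    bool waitingForTurn;
    Vector3 lastAccel;

    void Start()
    {
        timer = segmentSeconds;
        lastAccel = Input.acceleration;
    }

    void Update()
    {
        Vector3 accel = Input.acceleration;

        if (!waitingForTurn)
        {
            // Count down the current segment, then cue the next turn with a channel pan.
            timer -= Time.deltaTime;
            if (timer <= 0f)
            {
                bool turnLeft = Random.value < 0.5f;
                music.panStereo = turnLeft ? -1f : 1f;
                waitingForTurn = true;
            }
        }
        else if ((accel - lastAccel).magnitude > turnThreshold)
        {
            // An abrupt change in acceleration suggests the user has taken the turn:
            // re-centre the audio and start the next segment.
            music.panStereo = 0f;
            timer = segmentSeconds;
            waitingForTurn = false;
        }

        lastAccel = accel;
    }
}
```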


This turns out to be easier and to carry less overhead, in the sense that it requires fewer variables, and it achieves the desired effect. (A slightly far-fetched idea is to personalize the pace for each user to optimize their experience.) For the moment, we are going to use this model as a prototype and put more effort into the music manipulation aspect. Hopefully we will have time to come back and explore this very interesting and rebellious concept soon.

I learned some Blender by following the infamous donut tutorial! It was great fun.

But this is what it looks like in the Unity engine

👈🤨

Beneath the surface of front-end interaction, we are also thinking about the possibility of outputting, storing, and sharing the creative work produced through the application. On mobile operating systems (especially iOS), the difficulty lies in security and sandboxing issues for third-party developers. The problem is not so different from the one we encountered when developing the universal file opener app with React: we simply cannot access the file system or other applications on the phone. And this time it is hard (in fact impossible) to circumvent the system's security measures, so we can only store the output in a hidden folder within our own app. To work around the problem, we are in the process of connecting an external database, Google Firebase, to the scripts, and trying to export the output directly into it. (This may or may not have already broken my Xcode; let's see in a couple of days.)
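For the export path, one plausible shape (assuming the binary output goes to Firebase Storage rather than a Firestore document, and that the Firebase Unity SDK is already initialised in the project) would be something like the sketch below. The bucket path, file name, and helper name are made up for illustration.

```csharp
using System.Threading.Tasks;
using Firebase.Storage;
using UnityEngine;

public static class OutputUploader
{
    // Push an exported clip (as raw bytes) to Firebase Storage.
    public static async Task UploadAsync(byte[] exportedAudio, string fileName)
    {
        FirebaseStorage storage = FirebaseStorage.DefaultInstance;
        StorageReference fileRef = storage.GetReference("exports/" + fileName);

        // Upload the bytes; metadata (content type, user id, etc.) could be attached here.
        await fileRef.PutBytesAsync(exportedAudio);

        Debug.Log("Uploaded " + fileName);
    }
}
```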

These are our updates! As the idea morphs into more solid shapes, the team members will proceed to some additional quests in the new chapter of the year. New levels, new devils, let's beat some more! Hang in there for more progress!

Kelly