Category Archives: tracking

Single Character 2 Person Climbing Game

Agathe Limouzy, a French student from Toulouse, was an intern here at Curtin; I mentored her for a game design project. It was supposed to be cyber-archaeology but morphed slightly into a climbing game in which two people control a single character, using an HTC Vive and a Leap Motion controller (tracking hands) attached via a bandana. The person with the Leap can climb or send hand directions to the person in the head-mounted display, who controls the legs.

Short video at: https://twitter.com/curtinmakers/status/1042714070120448000
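The control split described above can be sketched as a shared character state with per-player input routing. This is a hypothetical illustration in Python (the game itself was built in a game engine); the class, limb names, and the "keep at least two other limbs attached" rule are all invented for the sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClimberState:
    """Single character state shared by both players."""
    left_hand_hold: Optional[str] = None
    right_hand_hold: Optional[str] = None
    left_foot_hold: Optional[str] = None
    right_foot_hold: Optional[str] = None

    def attached_limbs(self) -> int:
        holds = (self.left_hand_hold, self.right_hand_hold,
                 self.left_foot_hold, self.right_foot_hold)
        return sum(h is not None for h in holds)

    def can_move(self, limb: str) -> bool:
        # Invented rule for the sketch: a limb may only reach for a new hold
        # while at least two other limbs remain attached.
        current = getattr(self, limb)
        others = self.attached_limbs() - (1 if current is not None else 0)
        return others >= 2

def apply_input(state: ClimberState, player: str, limb: str, hold: str) -> bool:
    """Route input: the Leap player moves only hands, the HMD player only legs."""
    allowed = {"leap": ("left_hand_hold", "right_hand_hold"),
               "hmd": ("left_foot_hold", "right_foot_hold")}
    if limb not in allowed[player] or not state.can_move(limb):
        return False  # wrong player for this limb, or the move would cause a fall
    setattr(state, limb, hold)
    return True
```

The point of the sketch is that both input streams mutate one state object, which is what forces the two players to cooperate.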

Kinect & HMD collaborative engagement

Corbin is my summer intern, looking at
1. Kinect-Minecraft v2: a software framework for non-programmers to create their own gestures for Minecraft interaction: https://www.youtube.com/watch?v=09tc3nLgx9w

See also: https://maker.library.curtin.edu.au/2016/08/02/creating-a-gui-for-kinect-v-2/

2. Kinect-Unity pointer software:

3. Point clouds with a Head Mounted Display (HMD) /Unreal. Status: exploratory.

Reference http://digitime.nazg.org/index.php/2016/10/09/exploring-massive-point-clouds-in-virtual-reality-with-nvidia-tech-demo/

See also CAA2017 slides from Damien Vurpillot: https://www.academia.edu/30171751/Exploring_massive_point_clouds_how_to_make_the_most_out_of_available_digital_material

4. Corbin will narrow the above down into one main investigation: evaluating shared virtual experiences across different displays (cylindrical versus HMD), and uncovering similar papers with a collaborative learning focus. Ideally there will be a comparison of Unity versus Unreal.


New project 1: HMD Augmented Reality Heritage Trail study

The following was a successful grant, funded by the Curtin Institute of Computation.

The program/research plan:

In 2016 the Chief Investigator (CI) organized a one-day talk and workshop on cultural heritage visualization (“GLAM-VR”, Curtin HIVE, http://slides.com/erikchampion/glamvr16-26-08-2016 ) and helped facilitate a related makerspace event (“Cultural Makathon at Curtin Library Makerspace”, URL: http://slides.com/erikchampion/deck-4#/fullscreen#/ ). All groups of students finished their projects apart from one group, which encountered trouble designing inside a 3D game engine. For the 2016 Augmented Reality makerspace tutorials, there was similar difficulty in finding suitable tutorial material. Unfortunately, there are few tutorials and examples for augmented reality and 3D game engines suited to hackathon or makathon events. There is even less material for cultural heritage augmented reality tours. And there is no academic feature-list survey and comparison of recent augmented reality headsets for cultural heritage tours, where one walks along a heritage trail using an augmented reality head-mounted display (HMD) for augmented information.

This 2017 pilot study aims to resolve this issue by providing an exemplar, online resources and a white paper:

  1. The two ECRs (Early Career Researchers) will develop a simple digital 3D environment prototype which reveals cultural heritage assets, artefacts and landmarks when viewed inside a portable head-mounted display (HMD) or augmented reality HMD.
  2. We will compare the relative strengths and weaknesses of the above HMDs, run an evaluation with test subjects covering preferred display and time required to navigate and wayfind, and record participants’ task performance and memory recall.
  3. We will create a white paper for this, including suggested workflows and appropriate tools.
  4. From the above findings we will provide an online training course for developing Augmented Reality cultural heritage tours for Head-Mounted Displays.
  5. There will be a pilot workshop at the Curtin Library Makerspace.

Kinect GUI for Minecraft and others…

In Semester 1 (March to June) and from July, Karen Miller (Library Makerspace and Information Studies) and I are ‘clients’ for Curtin software engineering students. Their brief is to build a flexible Graphical User Interface (GUI) that connects the Microsoft Kinect 1 camera to game engines such as Minecraft, so that non-programmers can easily select and map their own gestures to a command library in the virtual world/game level.
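The core idea, letting non-programmers map their own gestures onto game commands, can be sketched as a small rule registry. This is an illustrative Python sketch, not the students' actual GUI code; the joint names, thresholds and command strings are all assumptions.

```python
# Non-programmers define gestures as simple joint-position rules in the GUI
# and map each rule to a game command. Names and numbers are illustrative.

def make_rule(joint_a, joint_b, axis, min_diff):
    """Build a rule that fires when joint_a exceeds joint_b by min_diff on an axis."""
    def rule(skeleton):
        return skeleton[joint_a][axis] - skeleton[joint_b][axis] >= min_diff
    return rule

class GestureMapper:
    def __init__(self):
        self.bindings = []  # (gesture name, rule, game command)

    def bind(self, name, rule, command):
        self.bindings.append((name, rule, command))

    def commands_for(self, skeleton):
        """Return the game commands for every gesture the skeleton currently matches."""
        return [cmd for _, rule, cmd in self.bindings if rule(skeleton)]

mapper = GestureMapper()
# "right hand raised above head" -> jump, the kind of rule a GUI could expose
mapper.bind("raise_right_hand",
            make_rule("hand_right", "head", "y", 0.10),
            "jump")
```

In the real project the rules would come from the GUI and the commands would be sent to Minecraft; here both ends are stubbed so the mapping logic stands alone.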

The forerunner of this project, coded by Jaiyi Zhu, was cited in the NMC Technology Outlook Horizon Report. Dr Andrew Woods, HIVE manager, wrote:

Congratulations Karen Miller, Erik Champion and Jaiyi Zhu on having their work cited in the NMC Technology Outlook Horizon Report < https://t.co/YeZMHU76gI >
This project was supported by the 2015 HIVE Summer Internship Program and I’m very happy this great project and Jiayi’s hard work is being acknowledged. https://maker.library.curtin.edu.au/2016/02/19/minecraft-edu-in-the-library-makerspace/

Digital Heritage, Scholarly Making & Experiential Media

Our internal small grant (School of Media Culture and Creative Arts, Curtin University) was successful!

Here is a synopsis of the application (redacted):

Digital Heritage, Scholarly Making & Experiential Media

We propose:

  • A one-day workshop [Friday 26 August 2016, HIVE] with 3D, digital API, Unity and Augmented Reality sessions.
  • We will present our projects at that workshop and a month later meet to review progress and each other’s publications and grants.
  • Then we will organize with the Library and other GLAM partners a cultural hackathon in Perth where programmers and other parties spend a day creating software prototypes based on our ideas from the workshop. The best project will win a prize but the IP will be open source and contestants may be invited into the research projects or related grant applications.
  • Equipment to build prototypes and showcases for future grants. Part of the money will also go into Virtual Reality headsets and Augmented Reality equipment that can be loaned out from the MCCA store to postgraduates and students.

The above would help progress the following research projects:

  • One need is to develop makerspace and digital literacy skills in information studies and the Library Makerspace, to develop a research area in scholarly making.
  • Another project is to integrate archives and records with real-time visualisation such as in the area of digital humanities scholarship, software training in digital humanities, and hands on workshops and crafting projects at the Curtin University Library.
  • Another project is to explore how Scalar can integrate 3D and Augmented Reality, and to create a framework for cloud-based media assets that dynamically relate to an online scholarly publication. Could that journal, in printed form with augmented reality trackers and head-mounted displays, become a multimedia scholarly journal whose multimedia is dynamically downloaded from the Internet and so continually updated? Can this work inform future developments of eSPACE and interest in ‘scholarly making’ and makerspaces?
  • There is potential to create an experiential media research cluster with the new staff of SODA, to explore immersive and interactive media that can capture emotions and affects of participants or players. This requires suitable equipment.

Image Recognition / Search Applications

I was asked yesterday if there are applications that can recognise and catalogue images, particularly of buildings.

Years ago I proposed a simpler (ontology-reductive) system for tourism, so I have an interest in uses for this technology, but I am not in this field.
Nevertheless, a quick search found these links. No guarantees, but some of the tools look very interesting indeed:

Firstly, an overview:
List of 14+ Image Recognition APIs

The best-known applications, which as far as I know only search (rather than categorise), matching your photograph against what may be on the Web:
Google Goggles

More details:
https://en.wikipedia.org/wiki/Google_Goggles
Of course the way forward for entire scenes courtesy of Google + Stanford:
NB https://en.wikipedia.org/wiki/Bing_Vision does not appear suitable here

For general purposes, this looks promising:

If you wish the app to automatically sort and tag images (so that a database can automatically order them), do you have a developer?

Besides http://www.recognize.im/ you can also check out (and thanks to Quora):


NB Diego Jiménez-Badillo, Mario Canul Ku, Salvador Ruíz-Correa, Rogelio Hashimoto-Beltrán have created a 3D version: “A machine learning approach for 3D shape analysis and recognition of archaeological objects”.

Skyrim on PC with Kinect and Kinect One

FROM: http://projects.ict.usc.edu/mxr/faast/

Have a Kinect for Windows v2?

We have developed an experimental version of FAAST with support for the Kinect for Windows v2, available for download here (64-bit only). Please note that you must already have the Microsoft Kinect SDK v2 installed and the KinectService application running. This is based on preliminary software and/or hardware, subject to change.

FAAST is currently available for Windows only.
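FAAST works by binding body poses (leaning forward, extending an arm, and so on) to emulated keyboard events, which is how an unmodified game like Skyrim can be driven by the Kinect. The sketch below mimics that binding style in Python; the rule format, sensor names and thresholds are illustrative, not FAAST's actual configuration syntax.

```python
# Pose-to-key bindings in the FAAST style: when a body reading crosses its
# threshold, emit a keyboard action the game already understands.
# Binding format and names here are invented for illustration.

def parse_binding(line):
    """Parse e.g. 'lean_forward 15 key_hold W' into (sensor, threshold, action, key)."""
    sensor, threshold, action, key = line.split()
    return sensor, float(threshold), action, key

def evaluate(bindings, readings):
    """Return (action, key) pairs for every binding whose reading crosses its threshold."""
    events = []
    for sensor, threshold, action, key in bindings:
        if readings.get(sensor, 0.0) >= threshold:
            events.append((action, key))
    return events

bindings = [parse_binding("lean_forward 15 key_hold W"),
            parse_binding("right_arm_forward 20 key_press E")]
```

A real system would then inject those key events into the OS input queue, which is the part FAAST provides.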

Video Demos: http://projects.ict.usc.edu/mxr/faast/faast-video-gallery/
Video of Skyrim and Kinect for PC: https://www.youtube.com/watch?feature=player_embedded&v=Z83wzJwrBK0

From http://www.dwvac.com/

The VAC system is a useful program which you use to issue commands to your flight simulator, role-playing game or any program. Since you have your hands full while playing those busy games, you can now put your voice to work for you. Use your voice to speak words or phrases to issue commands to your favorite games. VAC uses a unique method of phrase recognition which greatly reduces unwanted issued commands caused by ambient noise.
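The noise-rejection approach described, recognising whole phrases rather than isolated words, can be sketched very simply: a command fires only when the entire registered phrase is heard. This is an illustrative Python sketch, not VAC's implementation.

```python
# Whole-phrase matching: stray words picked up from ambient noise do not
# fire a command unless the complete registered phrase is heard.

class PhraseCommander:
    def __init__(self):
        self.commands = {}

    def register(self, phrase, command):
        self.commands[tuple(phrase.lower().split())] = command

    def recognise(self, heard):
        """Return the command only if the full phrase was heard, else None."""
        return self.commands.get(tuple(heard.lower().split()))

vac = PhraseCommander()
vac.register("lower landing gear", "GEAR_DOWN")  # illustrative binding
```

Matching on the full word sequence rather than any member word is the whole trick: "landing" alone, overheard in conversation, matches nothing.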

Alternative camera tracking systems: Asus Xtion Pro and structure.io

Asus have an alternative to Kinect One and Playstation 4 cameras:
http://www.asus.com/Multimedia/Xtion_PRO/

Also for mobile devices, especially iPad:
http://structure.io/getstarted

While the Structure Sensor has been carefully designed to work exceptionally well with iOS devices, it has also been designed to work with a wide range of other devices and platforms, including Android devices, and Windows, OS X and Android laptop and desktop PCs.

To use the Structure Sensor with any of these alternative platforms, you will need the USB Hacker Cable as well as OpenNI 2 drivers, which can be downloaded on our OpenNI 2 page. More information on using the Structure Sensor with OpenNI 2 drivers can be found in the Structure Sensor and SDK Forums.

OpenNI: not sure if it is still actively developed. Refer: http://en.wikipedia.org/wiki/OpenNI

After the acquisition of PrimeSense by Apple, it was announced that the website OpenNI.org would be shut down on April 23, 2014. Immediately after the shutdown, organizations that used OpenNI preserved documentation and binaries for future use, such as the page at Structure.io.

PS4 camera appears to work with Mac OS X

As an alternative to Kinect One (which requires Windows 8), the PS4 Eye camera has some interesting features and functions, although it is lower resolution and does not have an infra-red (IR) blaster.
It costs around AUD $85, or cheaper online or via JB Hi-Fi…

How does it compare to Microsoft’s Kinect One camera?
Review in 2013: http://au.ign.com/blogs/finalverdict/2013/11/02/xbox-one-vs-playstation-4-kinect-20-vs-playstation-4-camera
Review in 2014: http://www.techradar.com/au/news/gaming/consoles/ps4-vs-xbox-720-which-is-better-1127315/5#articleContent
A not so positive review: http://www.techradar.com/au/reviews/gaming/gaming-accessories/playstation-4-camera-1202008/review

One possible future use for PS4 eye camera, the VR Project Morpheus: http://www.techradar.com/au/reviews/gaming/project-morpheus-1235379/review

http://www.psdevwiki.com/ps4/PlayStation_4_Camera

Available functions

  • photo, video
  • voice commands (available as well with an earset with microphone)
  • depth calculation/imaging
  • pad, move, face, head and hand recognition/tracking
  • one of the cameras can be used for generating the video image, with the other used for motion tracking.
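The depth calculation relies on the camera's two lenses: a point appears at slightly different horizontal positions in the two images, and depth follows from that disparity as Z = f·B/d (focal length × baseline ÷ disparity). A minimal Python sketch, with illustrative numbers rather than the PS4 camera's actual calibration:

```python
# Stereo depth: the larger the disparity between the two images, the
# closer the point. Focal length is in pixels, baseline in metres.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth in metres of a point seen with the given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point in front of camera)")
    return focal_px * baseline_m / disparity_px
```

Note the reciprocal relationship: doubling the disparity halves the depth, which is why stereo depth resolution degrades quickly with distance.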

http://www.psdevwiki.com/ps4/Talk:PlayStation_4_Camera

Features (could be used for eye tracking…)

  • automatic black level calibration (ABLC)
  • supports 2×2 binning
  • programmable controls for frame rate, mirror and flip
  • standard serial SCCB interface
  • cropping and windowing
  • image quality controls: lens correction and defective pixel cancelling
  • two-lane MIPI/LVDS serial output interface
  • embedded 256-bit one-time programmable (OTP) memory for part identification, etc.
  • supports output formats: 8/10/12-bit RAW RGB (MIPI/LVDS)
  • on-chip phase lock loop (PLL)
  • supports horizontal and vertical sub-sampling
  • programmable I/O drive capability
  • supports image sizes: 1280×800, 640×400, and 320×200
  • built-in 1.5V regulator for core
  • supports alternate frame HDR / line HDR
  • fast mode switching
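The three supported image sizes follow directly from the 2×2 binning and sub-sampling support: each step halves both dimensions, so 1280×800 reduces to 640×400 and then 320×200. A quick sketch of the arithmetic:

```python
# Each 2x2 binning/sub-sampling step combines (or skips) 2x2 pixel blocks,
# halving both width and height.

def binned_sizes(width, height, steps):
    """Return the full-resolution size followed by each successive 2x2-binned size."""
    sizes = [(width, height)]
    for _ in range(steps):
        width, height = width // 2, height // 2
        sizes.append((width, height))
    return sizes
```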

What can you do with it?
http://ps4eye.tumblr.com/
“Bigboss (@psxdev) has successfully streamed video data from the PS4 camera to OS X!”
Picture at https://twitter.com/psxdev/status/439787015606136833

Drivers for PS4
http://bigboss-eyetoy.blogspot.co.uk/2014/09/ps4eyecam-released.html
Links to https://github.com/bigboss-ps3dev/PS4EYECam/
“It is the first public driver for PlayStation 4 Camera licensed under GPL.”

Historical
PS3Eye for Mac: http://webcam-osx.sourceforge.net/