Category Archives: Other Worlds

Rock-paper-scissors at HiFi, with thanks to SL’s Strachan Ofarrel!

Dan Hope over at High Fidelity has provided a light-hearted blog post on using the Leap Motion gesture device with the High Fidelity Alpha.

The blog post includes a video showing Chris Collins and Ozan Serim in-world in High Fidelity playing a game of rock-paper-scissors. The intention is to provide something of an update on integrating Leap Motion with High Fidelity.

Both Chris's and Ozan's avatars have intentionally oversized hands which, although they look silly / awkward, help emphasise the dexterity available in the High Fidelity avatar. Not only can avatars mimic users' gestures, they can mimic individual finger movements as well (something Dan has shown previously in still images).

Dan also points out that the work to integrate Leap Motion hasn't been done internally, but has been a contribution from CtrlAltDavid – better known in Second Life as Strachan Ofarrel (aka Dave Rowe), the man behind the CtrlAltStudio viewer. As such, Dan points to it as an example of the High Fidelity Worklist being put to good use – although I'd say it's more a demonstration of Dave's work in getting new technology into virtual environments :).

A lot of people have been fiddling with Leap Motion – including fixing it to the front of an Oculus Rift headset (as noted in the HiFi blog post) in order to make better use of it in immersive environments. Having it fixed to an Oculus makes it easier for the Leap Motion to capture gestures – all you need to do is hold your hands up in your approximate field-of-view, rather than having to worry about where the Leap is on your desk.

Mounting the Leap motion to the front of Oculus Rift headsets is seen as one way to more accurately translate hand movements and gestures into a virtual environment. Perhaps so – but a lot of people remain unconvinced about using gesture devices as we have them today
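
As an aside for the technically curious: detecting the sort of poses used in a rock-paper-scissors game is conceptually quite simple once per-finger tracking data is available – you can get a long way just by counting extended fingers. Below is a minimal, purely illustrative sketch (not High Fidelity's or Dave's actual integration code), assuming the Leap Motion v2 Python SDK; the pose mapping is my own rough approximation.

```python
# Illustrative sketch only: classify a rock / paper / scissors pose from
# Leap Motion tracking data by counting extended fingers.
# Assumes the Leap Motion v2 Python SDK ("Leap" module) is installed.
import sys
import time

import Leap  # Leap Motion SDK module; must be on the Python path


def classify_hand(hand):
    """Map the number of extended fingers to a rock-paper-scissors pose."""
    extended = [f for f in hand.fingers if f.is_extended]
    if len(extended) == 0:
        return "rock"        # closed fist
    if len(extended) == 2:
        return "scissors"    # two fingers out
    if len(extended) == 5:
        return "paper"       # open hand
    return None              # ambiguous pose; ignore it


def main():
    controller = Leap.Controller()
    print("Hold a hand over the Leap... (Ctrl-C to quit)")
    try:
        while True:
            frame = controller.frame()  # latest tracking frame
            for hand in frame.hands:
                pose = classify_hand(hand)
                if pose:
                    print("Detected:", pose)
            time.sleep(0.2)  # simple polling; a Leap.Listener would be event-driven
    except KeyboardInterrupt:
        sys.exit(0)


if __name__ == "__main__":
    main()
```

A real integration would obviously need smoothing across several frames and hand-orientation checks, but the basic classification really is this direct.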

Away from the ubiquitous Oculus Rift, Simon Linden carried out some initial experiments using Leap Motion with Second Life in early 2013, and Drax also tried it out with some basic gesture integration using GameWAVE software. However, the lack of accuracy in the earlier Leap Motion devices didn't lend them to easy use with the platform, which is why more recent attempts at integration never really got off the ground. Leap Motion have, though, been working to improve things.

That said, not everyone is convinced of the suitability of such gesture devices when compared to more tactile input systems such as haptic gloves, which have the benefit of providing physical feedback (so when you pick a cube up in-world, you can “feel” it between your fingers, for example). Leap certainly appears to suffer from some lack of accuracy – but it is apparently getting better.

Given a choice, I’d probably go the haptic glove + gesture route, just because it does seem more practical and assured when it comes to direct interactions. Nevertheless, it’s interesting to see how experiments like this are progressing, particularly given the Lab’s own attempts to make the abstraction layer for input devices as open as possible on their next generation platform, in order to embrace devices such as the Leap Motion.
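
To give a flavour of what an open abstraction layer for input devices might look like, here is a purely hypothetical sketch – not the Lab's actual design, and every name in it is invented for illustration. The idea is that the platform consumes a device-neutral stream of tracked joint poses, with each physical device hidden behind a common adapter interface.

```python
# Hypothetical illustration of a device-neutral input abstraction.
# All class and field names are invented for this sketch; they do not
# reflect any actual Second Life / next-generation platform API.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class JointPose:
    """One tracked joint, e.g. a fingertip or wrist."""
    name: str                                        # e.g. "left_index_tip"
    position: Tuple[float, float, float]             # metres, device-neutral space
    orientation: Tuple[float, float, float, float]   # quaternion (x, y, z, w)


@dataclass
class InputFrame:
    """A single device-neutral snapshot of tracked joints."""
    timestamp: float
    joints: List[JointPose] = field(default_factory=list)


class InputDevice(ABC):
    """Common interface a Leap Motion, haptic glove, or future device implements."""
    @abstractmethod
    def poll(self) -> InputFrame:
        """Return the latest tracking data in the neutral format."""


class LeapMotionAdapter(InputDevice):
    """Translates Leap Motion frames into the neutral format."""
    def poll(self) -> InputFrame:
        # Real code would read a Leap frame here and convert each finger.
        return InputFrame(timestamp=0.0, joints=[])


def drive_avatar(device: InputDevice) -> None:
    """Avatar code only ever sees the neutral interface, never the device."""
    frame = device.poll()
    for joint in frame.joints:
        # Map joint.name onto the matching avatar bone and apply
        # joint.position / joint.orientation to it.
        pass
```

The pay-off of this kind of arrangement is that supporting a new device – a Leap Motion today, a haptic glove tomorrow – only means writing a new adapter; the avatar code never changes.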

2014 OpenSimulator Community Conference: tune-in

A fascinating Gource visualisation posted by nebadon2025 charting the growth of the OpenSimulator project by code commits from core developers up until the time of the 2014 conference

Saturday, November 8th, and Sunday, November 9th mark the 2014 OpenSimulator Community Conference, which is being jointly run by AvaCon and the Overte Foundation. The weekend promises to be packed with talks, presentations, workshops and more; and while in-world registrations have sold out, it is not too late to register for the livestream broadcasts of the conference events.

The full programme can be found on the conference website, however, the keynote events comprise:

Saturday, November 8th, 07:30 SLT – OpenSimulator Developer Panel, featuring: Mic Bowman, Planning Committee, Intel Labs; Michael Cerquoni; Justin Clark-Casey, Overte Foundation; James Hughes, Founder, BlueWall Information Technologies, LLC; Oren Hurvitz, Co-Founder and VP R&D of Kitely; Crista Lopes, Overte Foundation and the University of California, Irvine; and Melanie Milland, Planning Committee, Avination. Together they will discuss the future of the OpenSimulator platform, covering a range of issues including the future of the Hypergrid, content licensing and permissions, scalability, project maturity, and more.

Saturday, November 8th, Noon SLT – Philip Rosedale: “How will we build an open platform for VR over the internet?” – a presentation exploring the future of the Metaverse and the challenges that lie ahead.

Sunday, November 9th, 07:30 SLT – Dr. Steve LaValle: “Virtual Reality: How real should it be?” Although VR has been researched for decades, many new challenges arise because of the ever-changing technology and the rising demand for new kinds of VR content. This talk will highlight some of the ongoing technical challenges, including game development, user interfaces, perceptual psychology, and accurate head tracking.

The OSCC conference centre from the inaugural 2013 conference

The conference website also lists all of the speakers attending the event, who will be participating in the keynote events and in the various conference tracks which will be running throughout the weekend:

  • The Business & Enterprise track will feature sessions that cover a broad range of uses related to doing business in and with OpenSimulator, such as those by grid hosts, third-party developers, private entrepreneurs, in-world and enterprise businesses, as well as corporations and organizations using OpenSimulator for marketing, fundraising, product research, focus groups, and more.
  • The Content & Community track will feature sessions about all of the wonderful things that happen in-world. Building and content creation includes large-scale immersive art installations, ballet, theatre, performance art, machinima, literary arts, clothing designs, virtual fashions, architecture, music performances and other cultural expressions. There are also communities for nearly every interest, including role-playing groups, science fiction communities, virtual towns and interest groups, historical explorations, religious and spiritual communities, book clubs, and so much more.
  • The Developers & Open Source track will cover the technical side of OpenSimulator, encompassing servers, viewers, external components, grid architecture, development, administration – anything that is necessary for the installation, operation and use of an OpenSimulator system.
  • The Research and Education track will explore the ways in which OpenSimulator has become a platform for computationally understanding complex problems, characterizing personal interactions, and conveying information. This track seeks presentations regarding OpenSimulator use towards research applications in computer science, engineering, data visualization, ethnography, psychology, and economics. It will additionally feature sessions that cover a broad range of uses related to teaching and learning in and with OpenSimulator.
  • The Learning Lab will provide conference attendees the opportunity to explore and practice their virtual world skills, share their best OpenSimulator strategies, and experiment and discover diverse ways to use OpenSimulator to support creativity, knowledge production and self-expression. If you are a gamer or game enthusiast, this is the track for you! The Learning Lab features interactive sessions where attendees get to practice and apply skills hands-on, either in design or to play a game.

All of the event tracks are colour-coded within the main programme guide, and their respective pages on the conference website include the livestream feeds for those who are watching events remotely.

There will also be a number of social events taking place during the conference and, for those of a daring disposition, the OpenMeta Quest: “Your mission, should you be brave enough to accept it, is to find 12 hexagon-shaped game tokens across 7 sims while matching your MetaKnowledge for prizes. Look for the Adventure Hippo to begin your journey.”

For those who have registered to attend the conference in-world, don’t forget you can find your way there via the log-in information page. When doing so, do note that the organisers recommend against using the OSCC viewer which was made available for the inaugural conference in 2013; Singularity is the recommended viewer for this year’s conference.

As well as the conference venue, the OSCC Grid includes a number of Expo Zone regions featuring conference sponsors and community crowdfunder exhibits; a Shopping Centre region; and exhibits created by speakers in the Content & Community, Research & Education, and Learning Lab tracks.

All told, this packed weekend should be informative, fun and educational.

About the Organisers

The Overte Foundation is a non-profit organization that manages contribution agreements for the OpenSimulator project. In the future, it will also act to promote and support both OpenSimulator and the wider open-source 3D virtual environment ecosystem.

AvaCon, Inc. is a 501(c)(3) non-profit organization dedicated to promoting the growth, enhancement, and development of the metaverse, virtual worlds, augmented reality, and 3D immersive and virtual spaces. We hold conventions and meetings to promote educational and scientific inquiry into these spaces, and to support organized fan activities, including performances, lectures, art, music, machinima, and much more. Our primary goal is to connect and support the diverse communities and practitioners involved in co-creating and using virtual worlds, and to educate the public and our constituents about the emerging ecosystem of technologies broadly known as the metaverse.

High Fidelity launches documentation resource

High Fidelity have opened the doors on their new documentation resource, which is intended to be a living resource for all things HiFi, and which users involved in the current Alpha programme are invited to contribute to and help maintain in order to see it develop and grow.

Introducing the new resource via a blog post, Dan Hope from High Fidelity states:

This section of our site covers everything from how to use Interface, to technical information about the underlying code and how to make scripts for it. We envision this as being the one-stop resource for everything HiFi.

What’s more, we want you to be a part of it. We’ve opened up Documentation to anyone who wants to contribute. The more the merrier. Or at least, the more the comprehensive … er. And accurater? Whatever, we’re better at software than pithy catchphrases. Basically, we think that the smart people out there are great at filling in holes we haven’t even noticed yet and lending their own experience to this knowledge base, which will eventually benefit everyone who wants to use it.

Already the wiki-style documentation area contains a general introduction and notes on documentation standards and contributions; a section on the HiFi coding standard; information on avatar standards, including use of mesh, the skeleton, rigging, etc.; information on various APIs; a range of tutorials (such as how to build your avatar from MyAvatar); and client build instructions for both OS X and Windows.

The documentation resource includes a number of tutorials, including the basic creation of an avatar from the MyAvatar “default” (top); and also includes a section on avatar standards, with information on avatar construction, the skeleton, joint orients, rigging, etc. (bottom) – click for full size

All told, it makes for an interesting resource, and Dan’s blog post notes that the documentation project is also linked to the HiFi Worklist, allowing those who prefer not to write documentation to flag areas needing improvement, clarification or writing, so that those who do enjoy contributing documentation can take them on and be rewarded for their efforts.

As well as the link from the blog post, the documentation resource can be accessed from the High Fidelity website menu bar – so if you’re playing with HiFi, why not check it out?

With thanks to Indigo Mertel for the pointer.

 

Return to Blue Mars

The Amida Hall of the Byōdō-in Temple, Uji in Kyoto Prefecture, Japan, as recreated in Blue Mars by IDIA Labs (click any image for full size)

Remember Blue Mars, the mesh-based virtual world which arrived in open beta in 2009? Despite initially high hopes, it struggled to find an audience, either among general users or those of us familiar with the more free-form sandbox environments provided by the likes of SL. At its peak in 2010, it had attracted some 50,000 registrations, but only around one-tenth of that number were reportedly actually using the platform.

The statue of Buddha in the Amida Hall

By January 2011, Avatar Reality, the company behind the platform, had reduced staffing by two-thirds, to just 10 people, before opting to try the mobile route with an iOS app, and then pinning their hopes on a “Lite” version for the PC and Mac which offered users a “mixed reality” chatroom tool utilising Google Street View. Neither of these really worked out, and in 2012, Avatar Reality granted expanded rights to the Blue Mars technology, valued at $10 million in research and development, to Ball State University for 3-D simulation and research projects outside of gaming applications.

For most people, that seemed to be the end for Blue Mars – but that isn’t actually the case. Since 2012, the Institute for Digital Intermedia Arts (IDIA) Laboratories at Ball State University has undertaken a number of projects utilising the platform for a variety of educational, media and research activities as a part of their Hybrid Design Technologies initiative.

This work has been a natural outgrowth of IDIA’s early use of Blue Mars to create the Virtual Middletown Project, a simulation of the Ball Glass factory from early 20th-century Muncie, Indiana. The factory and its personnel were key factors in studies carried out by sociologists Robert and Helen Lynd in the late 1930s, which became classic sociological studies, establishing the community as a barometer of social trends in the United States.

Today, the Virtual Middletown Project remains a part of Blue Mars, accessible to anyone with the original Blue Mars Windows client, as is IDIA’s other major early Blue Mars project, a reconstruction of the 1915 World’s Fair in San Francisco. In addition, a number of more recent historical and educational projects have been created for a range of purposes, and these all sit alongside some of the surviving original “city” builds from Blue Mars, all of which are also open to exploration by the curious.

My own curiosity about the status of Blue Mars was rekindled in early 2014, when I caught a re-run of the BBC’s The Sky At Night, which examined the ancient monument of Stonehenge as a place for prehistoric solar and lunar studies (potentially up to and including predicting eclipses). The programme featured models of Stonehenge constructed in Blue Mars by IDIA Labs in 2013, which were subsequently used in programmes for the History Channel as well.

Stonehenge in Blue Mars during the 2014 summer solstice. The model can also be viewed from the perspective of 2700 BC and in a range of lighting conditions

As well as Stonehenge, Middletown and the 1915 World’s Fair, the existing IDIA catalogue includes models of Edo from the 1700s; the Mayan city of Chichen Itza; the pre-Columbian archaeological site of Izapa; Kitty Hawk, where the Wright Brothers experimented with powered flight; the Giza Necropolis; the Apollo 15 landing site at Hadley Rille; and so on.

All of the builds are fairly static in nature, although they can be explored, and some offer various levels of interaction, which comes in a variety of forms. In Edo, for example, there are various items asking visitors to CLICK ME in order to reveal additional information within the client; elsewhere, such as in the art gallery, clicking on the displayed pictures takes you to an associated web or wiki page; elsewhere still, “transport spheres” offer the opportunity to “jump into” real-world images of the place you’re visiting.

In addition, all of the builds offered by IDIA Lab feature a HUD system, located in the bottom right corner of the screen, which offers differing options depending on the model: these may range from a pop-up, browser-like panel offering further information on the location being visited, through to options for setting different lighting conditions, the time of day, or even views of the location based on different dates in history.

The winter solstice, Stonehenge, circa 2700 BC. Note the HUD buttons, lower right, which provide access to additional options and resources
