Into the Abyss and beyond: exploring our world

For the last several years, The Abyss Observatory has been a collaborative project led by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) and supported by a number of organisations, including the National Oceanic and Atmospheric Administration (NOAA), who host the core elements of the Abyss Observatory, the Open University in the UK, and the Digital Hollywood University, Tokyo.

At the start of May, Vick Forcella nudged me about Abyss, and the fact that it will be going offline in early June. I actually hopped over to have a look around then, but it has taken me a while to get this article sorted and written. My apologies to Vick and to the organisers of the Abyss that this has been the case.

The Calypso at the Abyss Observatory

The Observatory grew out of work initially started as the Abyss Museum of Ocean Science, which closed in May 2009, and two follow-on projects. The first of these projects was Vi and Yan’s Undersea Lab, founded by the current co-creator of The Abyss Observatory (August 2009), Yan Lauria from JAMSTEC, and Vianka Scorfield, one of the creators of the exhibits at the Observatory; the second project was the Ocean Observation Museum (November 2009), which saw Rezago Kokorin, one of the creators of the original Abyss museum join the team as co-curator, and Comet Morigi join as Artistic Advisor.

The focus of the Observatory is presenting information on Earth sciences in an immersive, informative manner. As such, it covers multiple levels, extending both under the water and into the skies overhead, and is also linked to a number of “external” regions, including a related Earth studies facility located at Farwell.

Finding your way around the facilities can take a little time; I personally recommend starting at the arrival hub, and taking the ground level / underwater walks which can be accessed via the beach, and which will take you under the waves, introducing you to marine life, marine monitoring, conservation and studies.

The Tektite underwater habitat at the Abyss Observatory

As well as meeting various members of our marine populace, the underwater walks take you through various information areas, with display models, infographics and information boards covering a wide range of subjects, including the unique Tektite Habitat, which in 1969/70 was the centre of research into reef ecosystems and human physiology studies related to both saturation diving and possible long-duration space missions. The Abyss facilities provide an overview of the Tektite studies, together with a cutaway model of the habitat (shown above).

Close to the Tektite habitat, visitors can find models of the bathyscaphe Trieste, which in 1960 reached a record maximum depth of some 10,911 metres (35,797 ft) in the Challenger Deep, the deepest known part of the ocean, located in the Mariana Trench in the Pacific. Alongside this sits Jacques Cousteau’s famous yellow underwater “flying saucer”, shown exploring the deeps while the Calypso is moored nearby. This part of the Observatory also includes a model of Ictineo I, a wooden-hulled submarine dating from 1858.

However, the Observatory is not all about ships and submarines – as noted, there is plenty of information on marine life and on marine conservation, and there are skyborne exhibits which offer opportunities to experience very deep sea diving. There’s even the option of relaxing in an underwater bar!

The Abyss Earth studies exhibit at Farwell is entitled Only One Earth, and presents the visitor with a tour of the Earth, starting with a basic introduction to the planet on the lowest level, progressing onwards and upwards through a history of the planet measured by the geological ages, which traces the development of life on Earth. This is a fairly interactive exhibit, with information boards, info givers that visitors are encouraged to click on (which display information in local chat), and buttons beneath graphics and images that reveal further information – and, of course, links to assorted web pages, as with the main Abyss Observatory displays.

The former climate studies exhibit

Unfortunately, as I am getting to this write-up a little late, some of the exhibition spaces created for the Abyss Observatory appear to have already been dismantled. The very excellent climate studies area that was once at Farwell (see above) no longer seems to be available, for example; teleports to it simply return the visitor to ground level.

When visiting the Abyss Observatory, it would be easy to dismiss it as being “old school” – the builds are prim, there is little or no mesh in evidence, etc. It’s also true that some sections of the observatory never seem to have been entirely finished. However, this doesn’t mean that the information which is presented is lacking; there is much on offer here. With a final guided tour of the facilities coming up on Saturday, May 30th at 07:30 PDT, I do recommend that anyone with an interest in marine ecology and / or the history of Earth consider paying the Observatory a visit.

SLurl Links

Oculus VR acquires Surreal Vision, and Connect 2 announced

My colleague Ben Lang, over at Road to VR, brought news my way of the latest acquisition by Oculus VR, following the company’s formal announcement on May 26th.

Surreal Vision is a UK-based company which grew out of Imperial College London, and is at the bleeding edge of computer vision technology. One of the founders is Renato Salas-Moreno, who developed SLAM++ (simultaneous localization and mapping) technology. As Ben explains in the Road to VR blog post:

Using input from a single depth camera, SLAM++ tracks its own position while mapping the environment, and does so while recognizing discrete objects like chairs and tables as being separate from themselves and other geometry like the floor and walls.

SLAM therefore offers the potential to take a physical environment, scan it, and literally drop it into a virtual environment, allowing people to interact with the virtual instances of the objects within it.

The other two founders of Surreal Vision are equally notable. Richard Newcombe is the inventor of KinectFusion, DynamicFusion and DTAM (Dense Tracking and Mapping) and worked with Salas-Moreno on SLAM++, while Steven Lovegrove co-invented DTAM with Newcombe and authored SplineFusion. All three will apparently be relocating to the Oculus Research facilities in Redmond, Washington.

The acquisition is particularly notable in that it follows on from Oculus VR acquiring 13th Lab, another company working with SLAM capabilities, at the end of 2014, alongside Nimble VR, a company developing a hand tracking system. However, at the time of those acquisitions, it was unclear what aspects of the work carried out by either company would be carried forward under the Oculus banner.

Richard Newcombe, Renato Salas-Moreno, and Steven Lovegrove of Surreal Vision (image courtesy of Oculus VR)

Surreal Vision seems to have been given greater freedom, with the Oculus VR announcement of the acquisition including a statement from the team on their hopes for the future, which reads in part:

At Surreal Vision, we are overhauling state-of-the-art 3D scene reconstruction algorithms to provide a rich, up-to-date model of everything in the environment including people and their interactions with each other. We’re developing breakthrough techniques to capture, interpret, manage, analyse, and finally reproject in real-time a model of reality back to the user in a way that feels real, creating a new, mixed reality that brings together the virtual and real worlds.

Ultimately, these technologies will lead to VR and AR systems that can be used in any condition, day or night, indoors or outdoors. They will open the door to true telepresence, where people can visit anyone, anywhere.

Connect 2, the Oculus VR conference, is promising to provide “everything developers need to know to launch on the Rift and Gear VR”

On May 21st, Oculus VR also confirmed that their 2nd annual Oculus Connect conference – Connect 2 – will take place between September 23rd and September 25th at the Loews Hollywood Hotel in Hollywood, CA.

The conference will feature keynote addresses from Oculus VR’s CEO Brendan Iribe, their Chief Scientist, Michael Abrash, and also from John Carmack, the company’s CTO. It promises to deliver “everything developers need to know to launch on the Rift and Gear VR”. As noted in the media and this blog, the launch of the former is now set for the first quarter of 2016, while it is anticipated that the formal launch of the Oculus-powered Gear VR system from Samsung could occur around October / November 2015.

System specifications for the consumer version of the Oculus Rift were announced on May 15th, and caused some upset and disappointment when the company indicated that the initial release of the headset would be for the Windows environment only; there would be no support for Linux or Mac OS X.

Atman Binstock, Chief Architect at Oculus and technical director of the Rift, issued a blog post on the system requirements the day they were announced, in which he explained the Linux / OS X decision thus:

Our development for OS X and Linux has been paused in order to focus on delivering a high quality consumer-level VR experience at launch across hardware, software, and content on Windows. We want to get back to development for OS X and Linux but we don’t have a timeline.

The Windows specifications were summarised as: NVIDIA GTX 970 / AMD 290 equivalent or greater; Intel i5-4590 equivalent or greater; 8GB+ RAM; compatible HDMI 1.3 video output; 2x USB 3.0 ports; Windows 7 SP1 or later. All of which, Binstock said, is required to allow the headset “to deliver comfortable, sustained presence – a ‘conversion on contact’ experience that can instantly transform the way people think about virtual reality.”

Second Life project updates 22/1; server, viewer, avatar rendering

Stand, Relay D’Alliez – Relay for Life Exhibit – blog post

Server Deployments, Week 22

Update, May 28th: a back-end issue with the RC deployment has meant that all three RC channels have been rolled back to their previous release, leaving the grid as a whole on the same server release.

As always, please refer to the server deployment thread for the latest updates / news.

There was no deployment to the Main (SLS) channel on Tuesday, May 26th, due to there having been no RC deployment in week #21.

On Wednesday, May 27th, all three RCs should receive a new server maintenance package, comprising:

  • A change to the logic for accessing group member lists in large groups
  • Internal server logging changes.

SL Viewer

Due to Monday being Memorial Day in the United States, the Lab was closed for normal office business, and there was no meeting to discuss potential RC viewer promotions. However, the most recent update to the Attachment Fixes RC viewer (Project Big Bird, currently version 3.7.29.301943) is showing a much reduced crash rate compared to the previous release (which prevented that version from being promoted to the de facto release viewer).

The crash rate is still slightly higher than that for the current release viewer, but speaking at the Simulator User Group meeting on Tuesday, May 26th, Oz Linden described it as “probably not a statistically significant difference”. Whether this means the viewer will be promoted to release status later in the week remains to be seen.

Increasing the Number of Avatars Per Region

Simon Linden: looking at ways and means to make it easier for the simulator and viewer to better handle large numbers of avatars

“There’s one change that I will follow up on … I added a way so I can adjust the ‘max avatars in a region’ setting.  I’d like to do an experiment soon and see what falls apart if we can get over 100 people into a region,” Simon Linden said at the simulator UG when discussing the upcoming RC deployment.

“This is purely experimental and there are no plans for changing the SL limits,” he went on. “But sometimes regions hit 100 [and] it would be nice if the viewer and simulator handled that better.”

There is already an additional means en route to the viewer by which users can gain greater control over how avatars around them in a region are rendered: Avatar Complexity, which will draw avatars above a rendering limit set by the user as a solid colour (the so-called “Jelly Baby” avatars). This will work alongside the existing Avatar Imposters capability already in the viewer.

However, in terms of his experiment, Simon suggested that one way to improve things might be for the viewer simply not to draw every avatar within a region; although how this would work, and the criteria used to determine which avatars are drawn and which aren’t, require careful consideration. Simon suggested the viewer might simply skip drawing those avatars that are furthest away once a threshold number of avatars in the region has been reached. Another option (suggested by a meeting attendee) would be for the control to be via the Max Number of Avatars setting within the viewer, so that once the number is exceeded, additional avatars are simply not rendered.
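The distance-based idea can be sketched very simply. The following is purely an illustration of the concept as discussed at the meeting, not anything from the viewer’s actual code; the function name, data structure and threshold are all my own for the purpose of the example:

```python
def avatars_to_draw(avatar_distances, max_drawn=100):
    """Hypothetical sketch of distance-based avatar culling.

    avatar_distances: dict mapping an avatar id to its distance
    from the camera. Returns the set of avatar ids that would
    actually be rendered: everyone, if the region is below the
    threshold; otherwise only the nearest max_drawn avatars.
    """
    if len(avatar_distances) <= max_drawn:
        return set(avatar_distances)
    # Sort avatar ids by distance and keep only the nearest ones.
    nearest = sorted(avatar_distances, key=avatar_distances.get)[:max_drawn]
    return set(nearest)
```

As the discussion noted, the real question is the selection criteria: distance alone is simple, but other schemes (such as honouring the user’s own Max Number of Avatars setting) would behave differently in crowded venues.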

As noted, Simon’s work is purely experimental, and primarily aimed at helping the Lab understand what might be done to improve things where there are large gatherings of avatars, and to perhaps try out one or two ideas based on what they learn.

Simon’s Rendering Tricks

As part of the discussion on avatar rendering, Simon handed out a note card of tips and tricks for improving your performance when dealing with complex avatars. While this includes the debug settings which will form a part of the new Avatar Complexity functionality, which will be appearing in a Snowstorm RC viewer soon, as well as suggestions which may already be known, I’m including his suggestions in full here for reference:

From Advanced > Show Debug Settings, set:

  • RenderAutoHideSurfaceAreaLimit   0
  • RenderAutoMuteByteLimit  0
  • RenderAutoMuteFunctions  7
  • RenderAutoMuteLogging  False
  • RenderAutoMuteRenderWeightLimit  350000
  • RenderAutoMuteSurfaceAreaLimit  150

In preferences / graphics, change “Max # of non-imposter avatars” to something like 8. Also try ctrl-alt-shift-4 to hide avatars, or ctrl-alt-shift-2 for alphas.

Note that two of these debug settings – RenderAutoMuteFunctions and RenderAutoMuteRenderWeightLimit – relate directly to Avatar Complexity and drawing avatars as “Jelly Babies”. RenderAutoMuteFunctions must be set to 7 in order for this to work, and the RenderAutoMuteRenderWeightLimit of 350,000 is purely an advisory starting point; the Lab estimate that this will render the top 3% of the most rendering-intensive avatars as solid colours. You may find you have to set the value somewhat lower in certain environments – such as night clubs and dance venues – in order for it to be effective. I’ve personally found that 150-200K tends to be required in very busy ballrooms, etc.

Urban decay in Second Life

Xin; Inara Pey, May 2015, on Flickr Xin (Flickr)

Xin is the home region for the store of the same name, a place I was drawn to after seeing images by Goizane Latzo. It is one of a number of regions which have taken disaster / apocalypse as a theme – perhaps the most notable (in terms of bloggers) being Sera Bellic’s The End of the World As We Know it, which I visited last month, although I’ve yet to blog about it.

In the case of Xin, designed and built by Alice Pvke (although apparently, “Jaix helped for like nine seconds”! :) ), it’s unclear precisely what has happened; the arriving visitor is presented with a town surrounded by mountains, and which is in a state of ruin. High-rise buildings stand broken or have toppled over to crash into their neighbours, while down below, the streets are slowly decaying, and the local freeway overpass is in a state of collapse.

Xin; Inara Pey, May 2015, on Flickr Xin (Flickr)

Has there been an earthquake or some other natural disaster? Or is the destruction the result of a war or some other man-made catastrophe? Whatever the cause, it would appear it left the local citizens in a state of turmoil; while the streets are now deserted, there are signs of city-wide violence; vehicles sit riddled with bullets, and even one of the city’s fire trucks appears to have been the target of deliberate assault, its once pristine bodywork battered and dented, its windscreens and side windows smashed in.

Across town sits an old amusement park, the bumper cars sitting pathetically amidst the ruin of their track, while a once proud Ferris wheel lies broken across the street, its cars sitting in a jumbled heap.

Xin; Inara Pey, May 2015, on Flickr Xin (Flickr)

Everything about this town speaks to a once-thriving metropolis; now humanity appears to have fled and, slowly but surely, nature is reclaiming the neighbourhood. Grass, its seeds no doubt carried by the wind, has started to lay claim to the flat roofs of some of the smaller buildings, while vines and creepers climb the sides of others and spread themselves along the old power lines that connect some of the skyscrapers. The streets themselves are starting to crack and break up as roots and grass force their way through ever-widening gaps in the ageing tarmac.

For those seeking an atmospheric backdrop for photos, Xin might provide a useful option – although admittedly, rezzing is disabled. Those looking for the store should take a look underground near the landing point.

Xin; Inara Pey, May 2015, on Flickr Xin (Flickr)

Additional Links