Philip Rosedale and virtual worlds: “we still don’t get it yet”

As noted by Ciaran Laval, Philip Rosedale appeared at the Gigaom Roadmap event held in San Francisco on November 18th and 19th. He was taking part in a (roughly) 30-minute discussion with Gigaom’s staff writer, Signe Brewster, entitled Designing Virtual Worlds, in which he explores the potential of virtual worlds when coupled with virtual reality, both in terms of High Fidelity and in general. In doing so, he touches on a number of topics and areas – including Second Life – providing some interesting insights into the technologies we see emerging today, aspects of on-line life that have been mentioned previously in reference to High Fidelity, such as the matter of identity, and what might influence or shape where VR is going.

This is very much a crystal-ball type of conversation, similar to the Engadget Expand NY panel discussion Linden Lab’s CEO Ebbe Altberg participated in at the start of November, inasmuch as it is something of an exploration of potential. However, given this is a more focused one-to-one conversation than the Engadget discussion, there is much more meat to be found in the roughly 31-minute long video.


Philip Rosedale in conversation with Gigaom’s Signe Brewster

Unsurprisingly, the initial part of the conversation focuses very much on the Oculus Rift, with Rosedale (also unsurprisingly, as they’re all potentially right) agreeing with the likes of the Engadget panel, Tony Parisi, Brendan Iribe, Mark Zuckerberg et al, that the Oculus Rift / games relationship is just the tip of the iceberg, and that there is so much more to be had that lies well beyond games. Indeed, he goes so far as to define the Oculus / games experience as “ephemeral” compared to what might be coming in the future. Given the very nature of games, this is not an unreasonable summation, although his prediction that there will only be “one or two” big game titles for the Rift might upset a few people.

A more interesting part of the discussion revolves around the issue of identity, which encompasses more than one might expect, dealing with both the matter of how we use our own identity as a means of social interaction – through introducing ourselves, defining ourselves, and so on – and how others actually relate to us, particularly in non-verbal ways (thus overlapping the conversation with non-verbal communication).

Identity is something Rosedale has given opinion on in the past, notably through his essay on Identity in the Metaverse from March 2014 – recommended reading for anyone with an interest in the subject. The points raised are much more tightly encapsulated here in terms of how we use our name as a means of greeting, although the idea of trust as an emerging currency in virtual environments is touched upon: just as in the physical world, we need to have the means to apply checks and balances to how much we reveal about ourselves to others on meeting them.


Can the facial expressions we use, exaggerated or otherwise, when talking with others be as much a part of our identity as our looks?

The overlap between identity and communication is graphically demonstrated in Rosedale’s relating of an experiment carried out at High Fidelity. This saw several members of the HiFi team talking on a subject, a 3D camera being used to capture their facial expressions and gestures, recording them against the same “default” HiFi avatar. When a recording of the avatar was selected at random and played back to HiFi staff sans any audio, they were still very quickly able to identify who the avatar represented, purely through a subconscious recognition of the way facial expressions and any visible gestures were used.

This is actually a very important aspect when it comes to the idea of trust as virtual “currency”, as well as demonstrating how much more we may rely on non-verbal communication cues than we might otherwise realise. If we are able to identify people we know – as friends, work colleagues, business associates, etc. – through such non-verbal behavioural prompts and cues, then a virtual medium which allows such non-verbal prompts to be accurately transmitted can only help establish that exchange of trust more rapidly, allowing much quicker progression into other areas of interaction and exchange.

Interaction and exchange also feature more broadly in the conversation. There is, for example, the difference between the forms of interaction which take place within a video game and those we’re likely to encounter in a virtual space. Those used in games tend to be limited to what is required by the game itself – such as shooting a gun or running.


If 3D spaces can be made to operate as naturally as we function in the real world – such as when handing someone something, as Mr. Rosedale is miming, might they become a more natural extension of our lives?

Obviously, interactions and exchanges in the physical world go well beyond this, and finding a means by which natural actions, such as the simple act of shaking hands or passing a document or file to another person can be either replaced by a recognisable virtual response, or replicated through a more natural approach than opening windows, selecting files, etc., is, Rosedale believes, potentially going to be key to a wider acceptance of VR and simulated environments in everyday life.

There’s a certain amount of truth in this, hence the high degree of R&D going on around input devices, from gesture-based tools such as Leap Motion through haptic gloves and other devices. But at the same time, the mouse / trackpad / keyboard aren’t going to go away overnight. They are still an essential part of our interactions with the laptops in front of us for carrying out a range of tasks which also aren’t going to vanish with the arrival and growth of VR. So any new tool may well have to be as easy and convenient to use as opening up a laptop and starting to type.

Drawing a comparison – interesting on a number of levels – between the rise of the CD-ROM and the impact of the Internet’s arrival, Rosedale suggests that really, we have no idea where virtual worlds might lead us simply because, as he points out, even now “we don’t get it yet”. The reality is that the potential for virtual spaces is so vast, it is easy to focus on X and Y and predict what’s going to happen, only to have Z arrive around the same time and completely alter perceptions and opportunities.

There are some things within the conversation that go unchallenged. For example, talking about wandering into a coffee shop, opening your laptop and then conducting business in a virtual space is expressed as a natural given. But really, even with the projected convenience of use, is this something people will readily accept? Will they want to be sitting at a table, waving hands around, staring intently into camera and sharing their business with the rest of the coffee shop in a manner that potentially goes beyond wibbling loudly and obnoxiously over a mobile phone? Will people want to do business against the clatter and noise and distractions of an entire coffee shop coming over their speakers / headphones from “the other end”? Will we want to be seated next to someone on the train who is given to waving arms and hands, presenting a corner-eye distraction that goes beyond that encountered were they to simply open a laptop and type quietly? Or will we all simply shrug and do our best to ignore it, as we do with the mobile ‘phone wibblers of today?

That said, there is much that is covered within the discussion, from what’s been learnt from the development of Second Life through to the influence of science-fiction on the entire VR/VW medium, with further focus on identity through the way people invest themselves in their avatars in between, until we arrive at the uncanny valley and a potential means of crossing it: facial hair! As such, the video is a more than worthwhile listen, and I challenge anyone not to give Mr. Rosedale a sly smile of admiration as he slips-in a final mention of HiFi in such a way as to get the inquisitive twitching their whiskers and pulling-up the HiFi site in their browser to find out more.

Transcending Borders: the Grand Finale and L$115,000 audience participation prizes reminder


The UWA-BOSL Grand Amphitheatre

The Grand Finale of the University of Western Australia’s Transcending Borders 3D art and machinima challenge will take place on Sunday, December 6th, 2014 – and you can be a part of the audience.

The event will be held at the UWA-BOSL Grand Amphitheatre, starting at 06:00 SLT on Sunday, December 6th, when the winners in both the 3D art and machinima categories, as determined by the judges, will be announced; all 39 machinima entries will also be shown continuously in the 24 hours leading up to the event.

The challenge presented by Transcending Borders has been for entrants to interpret the title of the competition in any fashion they deem applicable, and produce a 3D artwork (in no more than 150 prims) or short film based on their interpretation. So the title might be seen as transcending borders between space and time, or the past and present, or the present and future; it might be interpreted as divisions between dimensions, real and virtual; or borders separating nations or cultures or languages; it might even combine several or all of these ideas, or something else entirely – such as the many borders we encounter as we navigate our physical and virtual lives.


Saudade by Giovanna Cercise

And there is a total prize pool of L$115,000 (L$57,500 in each of the two categories) on offer to those members of the audience who participate in a special additional competition.

All you have to do is visit the art exhibits on display at the UWA gallery area and / or watch the machinima entries and then submit a list of the entries you think will finish in the TOP 10 in order 1st – 10th as decided by the official judging panel.

  • Entries should be submitted by e-mail to or via note card submitted to Jayjay Zifanwe in-world
  • All entries should include your name, and be titled either “Transcending Borders 3D Art Audience Event” or “MachinimUWA VII Audience Event”, according to the category being entered
  • You can enter the art participation event or the machinima participation event or both (make sure you submit one “top ten” list for each category in the case of the latter).

The five participants in each category whose lists come closest to the final order decided by the judging panel will each win between L$5,000 and L$20,000. In addition, the first prize winner will also be invited to be on the panel for the next grand art challenge, and ALL five will receive a special RL pack in the mail.

Simply follow the links above or at the end of this article to view the art entries and watch the films. Entries for both of the audience participation events should be received no later than Midnight SLT on Wednesday, December 3rd, 2014.

Remember, this is not a popularity vote. Your top 10 entry / entries should be your prediction of who the actual top 10 will be according to the official judging panel. Good luck to all those who enter – and I’ll see you at the Grand Finale event!


Inside the UWA-BOSL Amphitheatre

Related Links

Note: the art image included in this item should not be taken as any indication of my personal preference as a member of the Transcending Borders jury. It is included purely for the purposes of illustrating the article.

SL project updates week 47/1: server, viewer, RenderAutoMute functions

Nordan om Jorden; Inara Pey, November 2014, on Flickr (blog post)

My apologies for the lateness of this update; I’ve been busy with a variety of things, both SL and non-SL.

Server News – Week 47

There are no server deployments during the week, due to the hardware inspections taking place, which involve restarts running from Monday, November 17th through Friday, November 21st, as detailed in the Grid Status updates (as a reminder, there is an additional period of maintenance due on Wednesday, November 19th, commencing at 13:00 SLT).

According to Simon Linden, speaking at the Simulator User Group on Tuesday, November 18th, the work on the servers requires each box to be taken down, opened-up and physically inspected, with parts (unspecified) possibly being swapped-out.

Exactly how much work (if any) is required on each server may vary, making the process something of a “how long is a piece of string” question when it comes to how long it will take per box.

What prompted the work isn’t clear, but there was muted speculation that some servers may need a physical update of some description to avoid the potential of failing due to a defect. Whether this means one of the Lab’s suppliers alerted them to a problem, or something else has come up, isn’t clear.

However, none of this work should, outside of the rolling restarts, affect the performance of any regions.

SL Viewer

HTTP Pipelining Viewer

A new HTTP Pipelining RC viewer appeared on Monday, November 17th. The new version sees a “reduced pipelined texture and mesh fetching time-out so that stalled connections fail quickly allowing earlier retry. Time-out value changed from 150 seconds to 60 seconds.”

It is hoped that this viewer fixes the following issues:

  • BUG-7686 – “Avatars are not coming on viewer”
  • BUG-7687 – “Nothing is rezzing in SL,, av’s are all gray and textures will not rez”
  • BUG-7688 – “Since the last restarts I cant seen to see things I rez from my inventory or wear mesh in my inventory. I have done numerous clean installs of the latest LL viewer. I have also made sure I am not running the beta version of the AMD CCC”
  • BUG-7690 – “Textures and Meshes abruptly stopped rendering”
  • BUG-7691 – “Won’t rez properly”
  • BUG-7694 – “Textures and meshes loading slow or refusing to load”
  • BUG-7698 – “Textures much slower to load on a CDN region then on a clone of the same region not running CDN”

See the release notes for further details.
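The rationale behind the shorter time-out (abandon a stalled fetch sooner, so it can be retried sooner) can be sketched as follows. This is purely an illustration of the principle; the function and numbers are hypothetical, not actual viewer code:

```python
def fetch_with_retry(fetch, timeout, max_attempts=3):
    """Retry a stalled asset fetch (illustrative sketch only).

    `fetch` stands in for a pipelined texture/mesh request and is
    assumed to raise TimeoutError when the connection stalls. A lower
    timeout means each stalled attempt is abandoned, and retried, sooner.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch(timeout=timeout)
        except TimeoutError:
            if attempt == max_attempts:
                raise  # give up after the final attempt

# With the old 150-second time-out, three stalled attempts could tie up a
# connection for around 450 seconds; at 60 seconds, the same failure
# surfaces (or succeeds on retry) within around 180 seconds.
```

In other words, the change trades a little patience on genuinely slow connections for much faster recovery from connections that have stalled outright.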

Maintenance RC Viewer

There are reports that the Maintenance RC viewer currently in the viewer release pipeline contains a number of regressions involving joint position bugs. These issues are apparently known to the Lab, which hopes to have them corrected before the code merges with anything else.

Group Chat

There have been various reports of further group chat issues doing the rounds. On Monday, November 17th, for example, it was noted through Firestorm Support chat that at least two group chat servers were down. Asked about this during the Simulator User Group meeting, Simon Linden replied:

Yes some of the chat servers have been having troubles in the last few days. I’ve been looking into that … the code running there isn’t super new, and the outages might be timed with some of those restarts. In any case, there is an update soon for the chat servers, and already another in the pipeline.

Experience Keys

No major news here, other than those trying the Experience Keys in the current beta are being urged to file any additional issues they may have found as BUG reports as soon as they can. Simon Linden added to the request, “we’re trying to finish off the last few issues and have that Real Soon Now (sometime in the future, no promises).”

As previously indicated in my coverage of Experience Keys, the first release of the capabilities will not allow for grid-wide experiences, although this is something that is on the Lab’s list. Commenting on the plans, Oz Linden said:

The first general release of experiences won’t include being able to get a grid-wide key. It’s not so much that as there are more issues for us to deal with for grid-wide experiences, and we don’t want to make people wait for the ones …. Being able to do grid-wide experiences isn’t going to fall of the to-do list or anything.

As Oz was speaking, Simon Linden added:

We’ve talked about it before, and having a widely available data system like the key-value store would be really great, but there are a ton of issues with that being available, scaling it to cover the full SL population and all that. So we’re going ahead with a more feasible sized feature set.

RenderAutoMute Functions

One of the heaviest impacts on viewer performance comes not from issues with the SL servers or in rendering the contents of the region you’re in, but from avatars themselves, particularly in crowded or busy regions. The Avatar Imposters option within the viewer can help with this; however, the Lab is looking to bring a debug setting to the fore within the viewer’s UI to further help users control their viewer’s performance.

The setting in question is RENDERAUTOMUTERENDERWEIGHTLIMIT, which is somewhat tied to the Avatar Render Weight (once aka Avatar Render Cost), the colour-coded render value assigned to avatars which can be displayed over their heads via the Advanced menu (CTRL-ALT-D to enable if not visible): Advanced > Performance Tools > Show Draw Weight for Avatars.

Essentially, the idea is that by entering a value against this setting, you can define a limit above which the viewer will cease rendering avatars fully, and instead will render them as a solid colour imposter, regardless of how near / far they are from your point-of-view, reducing the rendering load on the viewer / your computer.
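As a rough sketch of the idea (the function and values here are illustrative only, not actual viewer code):

```python
def render_mode(avatar_render_weight, limit):
    """Illustrative sketch of the RenderAutoMuteRenderWeightLimit idea:
    an avatar whose render weight exceeds the user-set limit is drawn
    as a solid-colour imposter rather than fully rendered, whatever
    its distance from the camera."""
    return "imposter" if avatar_render_weight > limit else "full"
```

So, with a limit of 60,000, an avatar weighing 45,000 would render fully, while one weighing 250,000 would appear as a solid-colour imposter.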


Currently, you can use the RENDERAUTOMUTERENDERWEIGHTLIMIT option within the viewer to set a limit on rendering high-ARW avatars as solid colours in your viewer. Note how the debug setting doesn’t correlate with the ARW for the avatar – something that will be fixed when the setting becomes a UI option (which will also see the dependency on setting RENDERAUTOMUTEFUNCTIONS removed – see the notes below)

I used the term “somewhat tied” above because there is currently no obvious correlation between a number set within the debug setting and Avatar Render Weight, the figure it is ultimately intended to act upon. A further point of confusion is that the limit actually applied is calculated by multiplying the number you enter against RENDERAUTOMUTERENDERWEIGHTLIMIT by a certain LOD (level of detail) factor; so if you set RENDERAUTOMUTERENDERWEIGHTLIMIT to 60,000, the actual figure being used might be 92,000 (60K multiplied by the LOD factor).
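That multiplication amounts to a one-liner; note that the LOD factor of 1.53 used here is purely illustrative, back-calculated from the 60K / 92K example above, and not a documented viewer constant:

```python
def effective_limit(setting, lod_factor=1.53):
    """The limit the viewer actually applies is the debug setting
    multiplied by a LOD factor (the default here is illustrative,
    back-calculated from the 60,000 -> ~92,000 example)."""
    return setting * lod_factor

# e.g. a setting of 60,000 yields an effective limit of roughly 92,000,
# which is why the number you enter doesn't match the ARW values you see.
```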

Both of these points of confusion will be addressed by the Lab in making the option directly available through the viewer UI, so that there is a much clearer and obvious correlation between the setting and ARW. Oz Linden is also working on colour-coding the resultant solid avatars, so that it is possible to determine which avatars are just over any limit set in the viewer and which are well over it, allowing users to further fine-tune their settings according to needs / circumstance.


The two debug settings: you’ll need to set RENDERAUTOMUTEFUNCTIONS to 7 in order to experiment with RENDERAUTOMUTERENDERWEIGHTLIMIT

The option can actually be experimented with at the moment, although it currently has a dependency on another debug setting – RENDERAUTOMUTEFUNCTIONS – which must be set to 7 in order for any of the RenderAutoMute functions (5 in all) to work. Again, the Lab indicates that this dependency will be removed when RENDERAUTOMUTERENDERWEIGHTLIMIT becomes a UI option.

Again, the emphasis is on “experiment”, simply because of the lack of a direct correlation between values entered into the debug setting and the ARW values of surrounding avatars. However, if you do want to have a play with the setting as it is at the moment, Oz Linden suggests starting with a value of around 60,000 and working up or down from there, depending on your needs / circumstances.

There’s no time frame as to when this work may find its way into a viewer, but Oz is actively working on it, following a prompt from third-party contributor Jonathan Yap.

A rebuttal to one-dimensional writing

Sarawak by Loverdag on Flickr, one of the images used in my rebuttal to Marlon McDonald’s article on SL

On Friday, November 14th, erstwhile contributor to Moviepilot.com Marlon McDonald wrote an article about Second Life which is, to say the least, predictably one-dimensional.

The item in question, entitled These Strange Stories Prove Second Life Isn’t The Dreamworld You Believed…, takes as its rather predictable focus the subject of pornography in Second Life. It’s led to a fair level of upset among SL users – and rightly so; Mr. McDonald goes to considerable lengths to make his case by apparently passing on the opportunity to try the platform for himself, and instead digging through Google searches for articles that are anything up to seven years old (and none more recently written than three years ago).


Marlon McDonald: one-dimensional article

There is much that is wrong with the piece; not only does it present a one-sided view of SL, it’s clearly intended as clickbait – if not for Moviepilot directly (although it doesn’t hurt them!), then certainly for Mr. McDonald himself, a regular contributor there. Most of what is wrong is easy to spot and can be said through a comment on the piece. However, I opted to present a more direct rebuttal to the article through Moviepilot’s own pages, in the hope of also reaching Mr. McDonald’s intended audience and perhaps persuading them to look on SL differently.

You can read the article over on Moviepilot.

I don’t usually ask for page views – but in this case, I am. Not for myself, but to help the article get right up there alongside Mr. McDonald’s piece and truly give Moviepilot users an alternative point of view on SL. So please, if you wouldn’t mind, follow the link and have a read. Or, if you’re tired of my writing – just follow the link and go make yourself a cup of tea / coffee!