GDC2017, Part 3
Game Developers Conference 2017 (GDC) in San Francisco may be in the rearview mirror, but many of the technologies demonstrated and discussed at the show will prove to be the shape of things to come. GDC plays annual prologue to June's E3 Expo, a show nearly double the size. But where E3 is primarily a platform for game companies to show off their wares to industry and general media, GDC remains the show for developers to show off their tools and technical chops for the benefit of other developers.
Thus, our third installment of GDC 2017 coverage peeks into some of the emerging technology driving game development, and serves as a preview of what you may see as you head to E3.
Facial Capture
There was an array of facial capture technologies on display. Some are quite sophisticated, like Faceware's headcams and software that rig your face and its motions in 3D. Others, like MyDidimo (above), available for WebGL, Unreal, Unity, and Amazon's Lumberyard, use a single wrap-around facial picture to digitize your image onto a virtual character.
Text-to-Speech
Speaking of Amazon Lumberyard: at its booth, Amazon hooked its game engine up to the still-relatively-new Polly text-to-speech (TTS) service to demo a 3D avatar named Rin. Polly, announced at AWS re:Invent in November 2016, can speak in 47 voices across 24 languages.
There’s a lot of back-end work to create a complete experience, including Alexa NLP, Amazon Lex, and even Amazon Rekognition, to understand a user’s voice, text or even image-based input, and then reply using Polly, as Geekwire outlined. For gaming, use cases include embedded interactive help systems, and more dynamic NPC quest givers and companions. In the demo, the avatar’s responses were controlled by Twitch chat. It’s also not quite ready to be seamlessly hooked up to your favorite translation management system just yet.
Facial capture is usually accompanied by voice dubbing. Yet voice dubbing, due to its expense, is often reserved for main characters or scripted plots. TTS allows for procedurally generated responses, removing the need to record every possible line in advance.
Procedural Everything
Last year, No Man's Sky received a great deal of media attention for populating a galaxy of procedurally generated creatures, landscapes, and even whole planets. The hype for the game may have been overblown, but the concepts and methods of procedural generation are quite solid and have been used in gaming, animation, and other computer science domains to great success. Minecraft also creates procedurally generated worlds, and platformers have been using procedural generation for levels even longer.
It should come as no surprise that the gaming industry puts procedural generation at the heart of many tools and tricks of the trade. SideFX's Houdini is a prime example. In a demo of creating dynamic, procedurally generated elements, Luis Eduardo Garcia Anaya (Lune) of Feline Arts showed off the tools and methods he is using to build his new game Suki & the Shadow Claw.
Complex constructions (assets) no longer have to be designed one variant at a time. You now create a single standardized object, such as an island or a rope bridge, which can be spawned and tailored. For the rope bridge, the system itself adds more planks and ropes (or removes them), makes the bridge more taut and flat (or loose and saggy), and adds or removes gaps in the planking, all based on set parameters (see below).
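Houdini expresses this kind of parameterization through its node networks, but the underlying idea is easy to sketch in plain Python. The following toy generator is purely illustrative; the names, the parabolic sag model, and the gap logic are our assumptions, not Houdini's or Feline Arts' actual setup.

```python
import random
from dataclasses import dataclass

@dataclass
class BridgeParams:
    length: float = 10.0        # span of the bridge in meters
    plank_spacing: float = 0.5  # distance between plank centers
    sag: float = 1.0            # 0 = taut and flat; larger = loose and saggy
    gap_chance: float = 0.1     # probability that any given plank is missing
    seed: int = 42              # fixed seed so the same bridge regenerates identically

def generate_bridge(p: BridgeParams):
    """Return (x, y) positions for each plank; None marks a gap."""
    rng = random.Random(p.seed)
    count = int(p.length / p.plank_spacing) + 1
    planks = []
    for i in range(count):
        t = i / (count - 1)            # 0..1 along the span
        x = t * p.length
        y = -4 * p.sag * t * (1 - t)   # parabolic sag, deepest at mid-span
        planks.append(None if rng.random() < p.gap_chance else (x, y))
    return planks

# Tightening the bridge or closing the gaps is just a parameter change:
flat_and_solid = generate_bridge(BridgeParams(sag=0.1, gap_chance=0.0))
loose_and_holey = generate_bridge(BridgeParams(sag=2.0, gap_chance=0.3))
```

The point is that "flat and solid" versus "loose and full of holes" is a parameter change, not a new asset.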
Such parameter changes are reflected directly in the level design phase. Using Houdini, you can alter the shape of the bridge to tailor a leap or jump, and test it right then and there. If something is off, you can adjust the parameters immediately, so you know your level is completable by players long before sending it to playtest and discovering you misjudged a distance or a slope.
Speaking of "procedural everything," even dialogue itself can be procedurally driven, given tools like Rant. There is even a case to be made for procedurally generated lore. Of course, this raises questions of translation: if you have procedurally generated content, what do you need to do during the coding and testing process to ensure that the translated versions of writing or dialogue make sense?
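To make the translation problem concrete, here is a toy template-expansion generator in Python. To be clear, this is not Rant's actual syntax, just a simplified illustration of the technique.

```python
import random

# A tiny grammar in the spirit of tools like Rant (NOT Rant syntax).
GRAMMAR = {
    "greeting": ["Well met, {title}!", "Ah, {title}, you return."],
    "title": ["adventurer", "stranger", "friend of the realm"],
    "quest": ["{greeting} The {beast} has been raiding our {place}."],
    "beast": ["wyvern", "bandit gang", "giant rat"],
    "place": ["granary", "mines", "northern farms"],
}

def expand(symbol: str, rng: random.Random) -> str:
    """Pick a template for the symbol and recursively expand {refs}."""
    template = rng.choice(GRAMMAR[symbol])
    while "{" in template:
        start = template.index("{")
        end = template.index("}", start)
        ref = template[start + 1 : end]
        template = template[:start] + expand(ref, rng) + template[end + 1 :]
    return template

print(expand("quest", random.Random()))
# e.g. "Ah, stranger, you return. The wyvern has been raiding our mines."
```

Because word order, gender, and agreement differ across languages, directly translating each fragment often produces nonsense when recombined; procedurally generated text typically needs per-language grammars, not just a translated string table.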
Collaboration
Once your team reaches a certain size, you need to look at how everyone will work on the same code base, and even on the same environment or setting. In the old days, you might simply assign a different level or zone to each developer. But then you'd end up with a developer who was great at architecture but terrible at trees: cities done well, but countrysides (or city parks) not up to snuff. Or vice versa, so you had great forests, but architecture that looked like it was built out of pudding.
If you did want to involve a great architectural designer, a fantastic tree designer, and other diverse experts on the same setting, the problem becomes how to let each of them pull their own branch, make changes, and then have it all make sense when their commits are merged back.
Collaboration is a major challenge still being addressed by the tool publishers themselves. Unity's own efforts, demoed at Unite 16 last year, show that this sort of scaling remains top-of-mind for developers, and an opportunity for solutions.
Hence Scene Fusion from KinematicSoup, a Unity-based tool that aims to create a collaborative environment for scene development. Now your whole shop can work side by side at the same time, or a classroom project team can build the same environment together.
There are still roadmap items to take care of, such as locking down some elements while the rest of the scene remains editable. (Example: "Don't mess with the cathedral any more, but the garden needs some major work.") For now, though, this type of tool will definitely get your art team, or your art class, on the same page.
Straight to the Source Code
If you got deep into the Unity collaboration video above, you'd have heard the burning question in the back of many developers' minds: "How do I control these changes through Perforce?" For game developers, Perforce is the preferred source code management (SCM) system. It is far from the only SCM used in the gaming industry; there are also GitHub, Plastic SCM, and others. But Perforce has outsized representation among big studios such as EA, Ubisoft, Nintendo, CCP, and NCsoft.
A lot of these tools and systems are neat, but at the end of the day they have to tie back into the source code: branches and pulls, merges and commits. Ideas have to be made operational, not just on the initial commit, but across all the revisions and incremental updates made thereafter.
These complexities are not lost on the managers of massive production games with global audiences and frequent code releases, and this is where we found a lot of discussion with customers at GDC 2017: not just managing their art assets, engines, and algorithms, but also localization. They knew they wanted to go multilingual. It wasn't a question of "if," but "how, exactly?" There is no commoditized answer, by the way. Just as every repository is customized to work on a team's projects in a certain way, localization engineering integration is not one-size-fits-all.
There are a number of methods and means to accomplish internationalization and localization integration with source code management, from internationalization controls and Unicode support in Perforce (and the localization of Helix Swarm itself into Japanese), to a Transifex-to-GitHub connector running on a Sinatra (Ruby) server.
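The connector mentioned above was built in Sinatra, but the shape of such an integration is roughly the same in any web framework. Here is a hypothetical sketch in Python with Flask; the endpoint, payload fields, and shell commands are our assumptions, not the actual connector's behavior or Transifex's real webhook schema.

```python
from flask import Flask, request
import subprocess

app = Flask(__name__)

@app.route("/transifex-webhook", methods=["POST"])
def on_translation_event():
    """Hypothetical handler: when a resource is fully translated,
    pull the file and commit it to the game's repository."""
    event = request.get_json(force=True)
    # Field names here are assumptions for illustration.
    if event.get("event") == "translation_completed":
        lang = event["language"]
        # Shelling out to the tx client and git is one simple approach.
        subprocess.run(["tx", "pull", "-l", lang], check=True)
        subprocess.run(
            ["git", "commit", "-am", f"Update {lang} translations"],
            check=True,
        )
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```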
Many of our conversations with developers focused on where localization fits in the overall development cycle. Start too soon and you can burn cycles translating and re-translating text and UI/UX elements that are still fluid prior to release; placeholders may be more appropriate early on. Wait too long, however, and you may face last-minute refactoring, or even re-engineering your code base entirely, to accommodate new languages and global audiences.
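One common pattern for the "placeholders early, translations late" approach is key-based string lookup with a loud fallback, so untranslated text is visible in testing without blocking development. A minimal sketch follows; the resource layout, key names, and fallback behavior are assumptions for illustration.

```python
# Hypothetical per-language resource tables keyed by string ID.
STRINGS = {
    "en": {"quest.intro": "The {beast} has been raiding our {place}."},
    "ja": {},  # not yet translated
}

def localize(key: str, lang: str, **params) -> str:
    """Look up a string by key; fall back to English (or a visible
    placeholder) while the translation is still in flight."""
    text = STRINGS.get(lang, {}).get(key)
    if text is None:
        text = STRINGS["en"].get(key, f"[[{key}]]")  # loud placeholder
    return text.format(**params)

print(localize("quest.intro", "ja", beast="wyvern", place="mines"))
# Falls back to English until the Japanese resource lands.
```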
Wrap-Up
GDC is both a marathon and a sprint. It is a marathon because it shows the collective work of tens of thousands of individuals, some of whom have been laboring for years prior to setting up their booth at the show. It’s a sprint because you only have a few days to take it all in!
For Nectaria Koinis and myself, as the delegation from e2f engaging with game developers on their plans for globalization, it was exhilarating to talk to everyone we did. Thank you for your time and attention! We had far more conversations than these blog posts and photos can show. And for anyone who didn't get a chance to talk with us, feel free to email us at info@e2f.com and let us know your plans and needs for globalizing your game, or your entire release management system, for 2017 and beyond.
Disclaimer: Unless otherwise specified, e2f has no commercial relationship with any of the products, services or companies mentioned in this article. Though we’d love to win your business, we’re also just fans of the gaming industry, like you!