Personal blog of Elijs 'X2Eliah' Dima

Do we really need a new console generation?

All right, here’s a sentiment you’ve very probably heard countless times in the Internet’s gaming-related districts. Ahem. “I can’t wait for a new console generation, because the current ones are holding game development back. Especially for the PC master race.” And, a few weeks back, I shared that sentiment (mostly). Because, well, it is obvious, isn’t it? Multiplatform games are developed for hardware with 720p resolution, stupidly low amounts of RAM (512 MB comes to mind) and controls appropriated for, oh, about 3 buttons and a jigglestick (idk how game controllers work). But… between then and now, I sort of thought about it a bit. And now? Now I really am not so sure we need a new console generation at all. Especially if by “we” I choose to mean PC gamers.

A common enough argument for the problems brought on by consoles is the inherently limited level design – because consoles have about as much random access memory as a tapeworm has synapses, all videogame levels (areas) are by necessity small, cramped, and split apart by numerous loading-screen doors. Heck, just look at games like Fallout: New Vegas, which had to divide its key centrepiece – the Vegas Strip (that’s a street, not a stripper bar) – into distinctly separate areas with massive walls blocking line of sight and movement. And, sure, we PC gamers suffered from that console-based problem, since we got a wall-ridden Vegas just like everybody else did. So clearly consoles are forcing developers to make crappy levels/worlds, right? … Well, not really, no. There are multiplatform games that prove by example that you can have massive sprawling worlds and intricate level designs.

How about TES5: Skyrim? The surface world is staggeringly gigantic, and it is one whole, seamless block. As for cities, past the external wall they are all single massive areas too – there usually are no big gates right in the middle of a town that you need to cross. You enter a city, and it’s all there for your eyes to feast on. Or, taking another game, how about, say, Grand Theft Auto 4? Again, we have a singularly humongous open-world city with pretty much no loading screens just for moving around. Oh, I’m practically certain that the game does load behind the scenes, in the form of background asset streaming. But so what? Who cares how it is done behind the scenes, if the illusion presented to the player is one of a seamless, massive open world? Yeah, with about 16 gigabytes of RAM, you probably would have enough space to front-load the whole world and not touch the hard drive / disc at all after that. Would it really give a noticeable benefit? I’d guess probably not. And just how likely is it that the next console generation will have 16, or, heck, even 4 GB of RAM? Pft, yeah right. Ain’t gonna happen.
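To make the background-streaming idea concrete, here is a toy sketch – mine, not Rockstar’s, with all names and numbers purely illustrative – of how an open-world engine can keep only the world chunks around the player resident in memory, loading and evicting at the edges as the player moves:

```python
# Toy sketch of background asset streaming: only the world chunks near the
# player stay resident; distant ones get evicted. Illustrative names/numbers.

CHUNK_SIZE = 512      # world units per chunk side (made-up figure)
STREAM_RADIUS = 1     # keep a (2r+1) x (2r+1) block of chunks loaded

def chunk_of(pos):
    """Map a world (x, y) position to integer chunk coordinates."""
    x, y = pos
    return (int(x // CHUNK_SIZE), int(y // CHUNK_SIZE))

def wanted_chunks(pos, radius=STREAM_RADIUS):
    """All chunk coordinates that should be resident for this position."""
    cx, cy = chunk_of(pos)
    return {(cx + dx, cy + dy)
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)}

def update_streaming(resident, pos):
    """Return (to_load, to_evict) given the currently resident chunk set."""
    wanted = wanted_chunks(pos)
    return wanted - resident, resident - wanted
```

Each frame (or every few frames) the engine calls something like `update_streaming`, queues the `to_load` chunks onto a background disk thread, and frees the `to_evict` ones – so the player never sees a loading screen even though only a small window of the world exists in RAM at any moment.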

It’s not as if only open-world games can be used as examples here. Even with locked-down levels, we still see games that manage to provide branching, wide, open, duplicate/triplicate/multiplex paths. Games like, oh, I don’t know, Deus Ex: Human Revolution? Yeah, it has loading screens. But they are not around every second corner, and where they exist – masked as elevators or decontamination rooms – they are in places that make sense from an in-game perspective. More than that, by their very nature they provide a welcome break from the constant on-edge action, much like the pipes and ledges of Mirror’s Edge provide a break from the momentum of free-running across rooftops. Oh, hey, that’s another game that is on consoles and has pretty large and diverse levels!

It’s not that there aren’t any games that genuinely do suffer from console architecture in the way their levels are built. It’s just that… well, given the counterexamples, there’s only one conclusion to draw – it’s not so much the console specs as the developers’ ability to design and accommodate that causes these problems. Well, that, and there are just a lot of game designers who create crappy linear corridors intentionally (Hello, Mass Effect 2/3! Hello, generic broshooter game! Hello, Shift 2: Unleashed!).

Okay, but enough about level design. There are bigger argument-fish to shoot in this barrel of an article. How should I best put this… Welp, keep in mind that from here on I will be arguing purely from the PC-gaming side, because quite simply I don’t have a console (and I am writing this to determine whether the lack of a new console generation is hurting games on the PC). Okay, here goes. “Because the suckboxes and failstations have 10-year-old graphics hardware, modern games look like utter crap even on my 2000-dollar supercomputer! Waah waah my iPhallus isn’t satisfied!” Yeah, that might be a pretty harsh summary of most PC-gamer complaints about the graphical aspect of the problem, but… well, just browse PC gaming forums and that really is the most obvious summary. Now, obviously I can’t object to the claim that videogames on consoles don’t look all that spiffy these days on a technical level. There’s the ever-present low-resolution texture work, there’s the sheer limit on the polygons one can display at any one time, there’s the lack of resolution options beyond a meagre 720p… And, yes, if console games are ported 1:1 to the PC as-is, without any optimizations or considerations – much as Dark Souls: Prepare To Be Violated just recently was – then they do look like rubbish. Technically. But that’s the thing: game developers can choose to spend extra time on the PC versions and port (or side-develop) them with more graphical bling and flair. They can add support for higher screen resolutions. They can apply post-processing effects to a larger degree. Heck, they can offer up high-resolution texture packs for free as day-one DLC. Sleeping Dogs and Skyrim did just that – and why not? But this is just one aspect of the graphical issue we have to consider.

I feel fairly confident in saying that the early PS3/Xbox 360 games looked way, way worse than the most recent ones. Why? Because developers were just starting to learn the hardware and the engines. They spent a lot of time simply trying out what works and how to do stuff. Over time, working on the same platform, more and more knowledge accumulates and better and better ways are imagined to solve resource problems. Now, with most developers being fairly adept at developing for this generation of consoles, there is more time and more leeway for the artists to have their way, more diversity in what’s possible. Would getting a brand new, more powerful platform help in that respect? For the first year or two, no, not at all. A new platform is like a curveball: it forces developers and designers to start again from scratch, to poke and prod the system (and waste time doing so) to see what works and what doesn’t, and how to achieve things now. I’d argue that we are finally in a position where the graphics arms race has lessened a little bit – because games have reached the limits of current console hardware – and developers are practically forced to think about “useless” stuff like clever gameplay design, gorgeous aesthetics, intricate plots, things like that, to attract more players. Throw in a new, more powerful baseline platform, and once again we will see mostly cookie-cutter games all striving to whore themselves out to the goddess of graphical blingosity. And, seriously, fuck that lady. Do I want games to look good? Sure. Matter of fact, with good porting they do look good on the PC anyway. Do I want all developers to enter another dick-waving contest over their game’s graphical prowess? Hell no.

Matter of fact, it’s not even the purely technical graphical bling that defines whether a game looks good. That’s entirely dependent on the artistic design and the use of special effects. A game with more lens flares, more dynamic shadow mapping, more motion blur, more film grain, more colour-removal filters, more superhyperwtfaliasing – none of that means the game will look better. You can very, very easily ruin something by overusing a special effect to the point of idiocy. Star Trek: No Subtitle says “Hello”. Does that mean all lens flares are bad? Heck, no. It’s just that they should be used in context and in moderation, to create an appropriate, artistically intended look, not “OhMyGod Look At Our Graphics They Go Over 9000!!! /Swoon”. And that ability to plan and design a game’s appearance is not limited by console hardware. Not by the current generation, because let’s face it – it allows plenty of options to make games look realistic enough. Or pretty enough. A new generation of consoles won’t make games look 10 times better or anything. What it will do, quite likely, is require developers to spend 10 times more money and time on building the same amount of art assets at higher fidelity. And if it won’t require them to, it will definitely encourage it. And, hey, you know how the major AAA publishers are practically falling over dead from spending so much on game development that it becomes practically impossible for them to recover costs and make a profit (hi there, Dead Space 3)? Yeah, they really need new console hardware that encourages massively larger graphics-monkey dev teams. Yup. That’s gonna save the industry, for sure.

In the end, then, can we really say that the lack of a new console generation is holding games on the PC back? Not really. Developers have the choice to make proper ports, with upgraded visuals and all sorts of fiddly technical bits and bobs. Heck, developers have the choice to do PC-unique development for their game. And if they do go multiplatform, they have, by now, sufficient expertise with the baseline to allow themselves a certain degree of artistic leeway and experimentation. In fact, that’s the nub – I’d posit that a new console generation is what will really hold PC gaming back for a few years. Because the only improvements you will see will be on the graphics front again. No clever mechanics, no witty art designs, no innovative stories. Because the only games that truly “need” a new console generation are the ones that are killing the industry with serialization, formulaic design, overblown budgets, the storytelling capacity of a deep-fried Chinese noodle, and artistic bankruptcy. And the games that most commonly are PC-unique? The indies and the experimentals of modern gaming? Do they need more powerful consoles? No. And neither do we. But, seriously, a new console generation in 2013/14 is inevitable. You know it’s happening, I know it’s happening, the devs that already have access to devkits of the new consoles sure as hell know it’s happening. And it makes me sad. Because inevitably we will witness a new arms race for shinier graphics. This will be a new tsunami sweeping over the gaming industry… And given its current state, perhaps it will be something that many publishers and developers won’t recover from. So before you whine on Steam forums or YouTube video series about stupid consoles holding back gaming because “the wall textures are not pixel-perfect when zoomed in, ohmygod, worst game evurrr 0/10 /totalbiscuit”, think about what you really are asking for. Think about all the multiplatform games that are, frankly, kick-ass great. Think about all the superb indies, if you are such a hipster and hate all AAAs on principle. Nothing actually needs a new console generation. Nothing beyond graphics whoring, at least. And if you are a graphics whore, and if you want to sacrifice the gaming industry to get those immaculate hyper-realistic wall textures, then screw you.

~X2Eliah somehow got very angry while writing this post. It was not intended, but he doesn’t think he can actually write an entire post on this issue – consoles “holding back” gaming – without seeing the inevitable ruin of many things dear to him that a new console generation and hype over graphics would bring.
Edit: I’ve been reminded about neglecting the Nintendo-spawned reimagination of player-control interfacing. The motion controls, the WiiUs, the Kinects, the Moves. Why did I not bother to mention them? 1) They don’t apply to “(multiplatform) games on PC are held back by consoles”, 2) Honestly, I just don’t see them as positive/beneficial things.

2 responses

  1. Reblogged this on Gigable – Tech Blog.

    21/09/2012 at 13:09

  2. arron

    This has been one of the arguments during the Spoiler Warning Half-Life playthrough: a game released back in 2001 could be re-released with a patch that looks at the amount of memory available on the hardware and loads the whole lot in one go. Disks and on-board memory are already at a stupid size compared to 10 years ago. Loading screens are now much shorter than they were. Why not go the whole way, remove loading screens completely, and use the resources that are now available? We’ve got gigs of on-board memory compared to a few hundred MB a few years ago.

    If you can load levels asynchronously, in such a way that the next level is being loaded into memory whilst you’re playing the current one… then you could do away with inter-level loading as well. The game would appear continuous.

    22/10/2012 at 15:30
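The asynchronous loading the commenter describes can be sketched in a few lines – this is a toy illustration of the idea only, with `load_level()` standing in for the real disk/decompression work:

```python
import threading

# Toy sketch of asynchronous level preloading: start loading the next level
# on a background thread while the current one is being played, so the
# handover is (nearly) instant. load_level() is a placeholder for real I/O.

def load_level(name):
    # Stand-in for reading and decompressing level data from disk.
    return {"name": name, "assets": f"assets-for-{name}"}

class LevelPreloader:
    def __init__(self):
        self._thread = None
        self._result = None

    def start(self, name):
        """Begin loading the named level in the background."""
        def work():
            self._result = load_level(name)
        self._thread = threading.Thread(target=work)
        self._thread.start()

    def take(self):
        """Hand over the loaded level; blocks only if loading isn't done."""
        self._thread.join()
        return self._result
```

In a real engine you would call `start("level2")` when the player nears the exit of level 1, then `take()` at the transition – if the background load finished in time, the player never sees a loading screen at all.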
