The revolution will not be televised, you’ll make it in the garden shed

This week saw the launch of defcad.com. Dubbed the Pirate Bay for 3D printing, it has hit the major news headlines for its desire to publish blueprints for firearms. I cannot agree with this notion, and I find the way founder Cody Wilson is going about raising money for the project distasteful. I don’t agree with public access to firearms, and there is certainly no justifiable reason for a member of the public to own an AR-15; you shouldn’t bear arms, full stop. However, my instant thought was of James Burke speaking at dConstruct last year. In his closing remarks he presented a view of the near future, with nano factories in every home.

Pushing further down the page and reading the manifesto for Defcad, you could be forgiven for thinking that this is somebody attempting to move closer towards a potential utopia of self-sustainability, given his statements about law change, regulation and patent owners kicking up a fuss about the ability to print your own Ford GT, for example. You would be wrong. There is no agenda from Wilson other than to stir up some shit and rebel against a government he doesn’t like. He is in the truest form a fuckhead.

Sadly, I believe that we need these fuckheads if we are ever to reach an existence where there is no currency, there is no need to work, and anything you could want you can have. A future whereby everyone has their own nano factory slowly creating just the things they need, while we enjoy the world we inhabit.

James Burke audio from dConstruct

Response to images

During Owen Gregory’s talk at Responsive Day Out he touched on the theory of the Golden Ratio in design and how you can still use traditional design theory in a fluid world without compromising on form.

Many others throughout the day discussed the issues we currently face when approaching images in designing for a fluid web: file sizes, how to serve images up, what kind of image files we should use and when, and whether images are there for context or decoration.

Bruce Lawson went through the idea of serving entirely different versions of an image to suit a particular layout. In his example he showed a photograph with a dog sitting centrally in the frame. Using breakpoints, he presented different crops of the same image, arguing that these were contextually better for each viewport size. I disagree with this, but that’s another story.

What nobody questioned, however, is whether images themselves are actually a plague on good design when it comes to the web. Are they the devil we know that can never be changed?

Last year I was working with Paul Swain on a project which we knew was going to be heavily led by imagery. During the wireframing stage Paul kept placing image placeholders that used a 16:9 aspect ratio, so I asked him why. His response was that 16:9 gives more data in an image: in a sports scene, for example, it allows the subject to be the focus while their surroundings are given extra space to breathe. He also noted that when designing a fluid site we should consider that screens are all moving towards widescreen; columns are going to become wider than they are tall.

I couldn’t let this go. As a photographer, I have never owned a camera that shoots at 16:9; I have a Flip HD handheld video camera that shoots in 16:9, but none of my Nikon SLRs do. All I could think of was somebody having to spend more time shooting with a taped-off back screen (see below), and more time editing, before getting a story out on a news site, for example. I brought this scenario to the table and we swapped the placeholders for 4:3 ones; if an image was in a widescreen format, there was a bounding box it could fit into.

It did get me thinking: will the likes of Nikon completely lose their minds and do away with the 4:3 ratio we are all used to? Of course, other image formats are still commonplace today, notably Instagram with its square images in homage to the Polaroid, and the hipsters do love their Lomos.

Owen Gregory questioned why we have these ratios in our devices when they do not match the golden ratio. Modern displays, flat-screen televisions and monitors use a 16:9 ratio, whilst older displays were set at 4:3, as are most cell phones, the iPad and many other tablet devices such as the Kindle and Galaxy Tab. In 2012 Apple released the iPhone 5 at 16:9, and along with the Samsung Galaxy S3 and the Nexus 4 it is moving handheld devices into the widescreen HD era.

How did these ratios come to be and can we ever have visual perfection?

Cut along the perforated line

Well, it isn’t perfection, but that is where it all starts: with William Dickson and Thomas Edison creating the concept of a roll of film and a loader for a camera.

Edison and Dickson wanted to create images which gave the same level of detail as looking straight ahead, minus the periphery. The human eye has a field of view of roughly 155°H × 120°V, which is approximately 4:3.
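As a quick sanity check of my own (not from any source), the quoted field of view is only an approximation of 4:3, though a reasonably close one:

```python
# Rough check: the binocular field of view quoted above, 155 degrees
# horizontal by 120 degrees vertical, compared against 4:3.
fov = 155 / 120
four_three = 4 / 3

print(round(fov, 3))         # 1.292
print(round(four_three, 3))  # 1.333
```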

Their film wrapped around a spindle, with sprockets gripping it in place. The frame needed to meet the 4:3 ratio, and they concluded the ideal size was film 35mm wide with a frame four perforations high (the perforations being the holes the sprockets grip).

Does it have legs?

Naturally, when artists started considering moving pictures, the starting point was to use 35mm film, and lots of it. There were experiments with other formats as movie makers looked at how to create a more immersive experience. As with all good art, the marketing people had other ideas: how do you get more people in front of a film and make more money per show? The natural solution? Make it wider. Wider screen = more seats.

The most notable movement came from France, where the Paris film scene was booming. In 1897 Raoul Grimoin-Sanson patented Cinéorama, an immersive widescreen format; it never took off.

Other concepts emerged during the early 20th century, including anamorphic widescreen processes that were costly to produce, requiring two camera operators and extra editing time. With the Depression came cutbacks in Hollywood, and the anamorphic formats behind most widescreen systems were dropped in favour of the cheaper 35mm 4:3 systems.

Alfred Hitchcock refused to shoot in CinemaScope, citing that it created an unnatural and displeasing image, instead favouring VistaVision, which could be adjusted to suit a number of aspect ratios.

The Golden Age turns to Pyrite

As televisions found their way into more and more American households, Hollywood started to feel the pinch. Here is where we see history repeating itself (see 3D in 1915, 3D in the 1950s, 3D in the 1980s, and HD and 3D again in the 2000s). Film studios began experimenting with dimensions again, purposefully shooting in wider ratios that required new projectors in the cinemas affiliated with the film studios, providing a cinematic experience which clearly set it apart from television.

It caused chaos. Letterboxes became a visual cue to the kind of film you were watching, and ensured that every cinema could syndicate movies into their theatres without needing to upgrade their equipment.

Many of us will remember (I’ve actually seen it in France in the last month) the frustration of watching a film bought by a television network where the titles are chopped off left and right, and you can’t help but wonder whether you’re losing important plot points to the outer limits of the shiny plastic surround encasing the liquid crystal display.

Thankfully, Dr Kerns Powers rallied for some kind of standardisation. Eventually 16:9 was agreed upon, not because it is the most pleasing ratio for experiencing moving pictures, but because it is the middle ground in a muddy film landscape. This is why, even now with your HD (or even UHD/4K) screen, you will see letterboxes when watching certain movies.
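A back-of-envelope check of my own (Powers’ actual analysis used equal-area rectangles, but it lands in the same place): 16:9 sits almost exactly at the geometric mean of the two extremes in use at the time, 4:3 television and 2.35:1 anamorphic widescreen.

```python
from math import sqrt

# Aspect ratios expressed as width / height
tv = 4 / 3      # traditional television, ~1.333
scope = 2.35    # anamorphic widescreen, 2.35:1

# The geometric mean of the extremes is the "middle ground" compromise
compromise = sqrt(tv * scope)

print(round(compromise, 3))  # 1.77
print(round(16 / 9, 3))      # 1.778
```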

How do ratios fit into responsive design?

I have on quite a few occasions attempted to mimic a film format other than the one I am shooting with. Back in 2009 (the golden age of Flickr?) I was inspired by Dustin Diaz’s 365 project, where the majority of his shots were 16:9 and letterboxed. I spoke to him at the time about how he was doing this and tried it myself.

Perhaps we can start thinking about this in our web design. Can we use margins and padding in the same way to create faux widescreen experiences, or even bring widescreen back down to 4:3? After all, isn’t this what we’re doing every time we set body to margin: 0 auto?
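A rough sketch of how the faux widescreen idea could look in CSS, using the percentage-padding aspect-ratio trick (the class names are my own invention, not from any real project):

```css
/* Hypothetical sketch: crop a 4:3 photograph into a faux 16:9 letterbox.
   The padding creates a frame whose height is always 9/16 of its width. */
.letterbox {
  position: relative;
  overflow: hidden;
  padding-top: 56.25%;   /* 9 / 16 of the width */
  background: #000;
}

.letterbox img {
  position: absolute;
  top: 50%;
  left: 0;
  width: 100%;
  transform: translateY(-50%);   /* centre the taller image in the crop */
}
```

The margins stay fluid, so the “widescreen” crop holds its shape at any viewport width, which is the same spirit as margin: 0 auto on body.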

Responsive Prototyping

Recently Axure co-founder and Product Manager Victor Hsu posted in the forums that version 7 is going to be delayed so that they can spend more time on new features for responsive prototyping. Unless the application is going to be a complete rewrite of code and concepts, I can’t see how this will ever work. Like many people driven by employers and clients to use Axure to create ‘wireframes’, I have on several occasions made mobile and desktop versions of a prototype in Axure, with limited degrees of success. The overall problem with a tool like Axure when it is put into the modern context (although not that modern if you enjoy the rants of Jeremy Keith, who this week reminded us that many people have always worked with fluid layouts) is that Axure is fixed-width at its core.

When I first used Axure back in 2007, it gave my team and me an opportunity to reduce the time wasted writing detailed functional specifications and designing detailed screens for an application that already had a defined layout. It gave us the ability to focus on features and not concern ourselves with the application chrome.

For us, a development team consisting of database/system admins, PHP developers, front-end developers and systems testers, it saved huge amounts of time.

Provided we wrote detailed notes on widgets and pages, and worked with variables and state changes in panels, we were able to maintain a virtual copy of the live application in Axure. It could be validated with the product owner, it gave the dev team the necessary information to write and build the new branch of the app, and it meant that our tester didn’t have to write as much in her test scripts because there was something to validate against.

And this is where I would like to leave Axure. It is a fantastic tool for prototyping applications. Semantics aside, we all know there is a clear difference between what the general public may call a website and an application. Websites are the things we lose hours on looking at cats and reading drivel; applications are the things you put something into and get something out of.

I don’t know what is going to be in Axure 7, but they have some competition on the horizon from the big boys namely Adobe.

This week I got to see the preview of Adobe Edge Reflow. Although Reflow is being promoted as a responsive web design tool, I see some major opportunities here for creating prototypes by the audience that Axure is aimed at: people who are not able to write code.

During the demonstration, Piotr Walczyszyn walked through creating a simple site layout (header, footer, content blocks) using the graphical interface, which allows you to drag divs across a column grid system that you can set up by specifying the number of columns, the gutter size and what the total width will be in percentages.

The interesting part of Reflow is what happens when you resize the canvas. Using drag controls you can expand or contract the canvas until something starts to look rubbish, then add a breakpoint. From here you can adjust the elements of your layout, repeating ad infinitum.
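Under the hood, that visual workflow maps onto ordinary media queries. A hand-written sketch of what a single canvas breakpoint amounts to (the selector names and widths here are hypothetical, not from the demo):

```css
/* Illustrative only: the kind of rule a canvas breakpoint corresponds to. */
.sidebar {
  float: left;
  width: 25%;
}

/* Breakpoint added at the width where the layout started to look rubbish */
@media (max-width: 37.5em) {
  .sidebar {
    float: none;
    width: 100%;
  }
}
```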

Just from the demo it is clear that you can visually put together acceptable site concepts. I do however have some gripes with Reflow.

During the demo, Piotr regularly switched between using pixels and percentages to set maximum sizes. For example, when setting a breakpoint for a background image after a certain width, he set the max-width of the image in pixels. This was after an introduction explaining flexible images, as described ages ago by Richard Rutter.
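For reference, the flexible-image technique that introduction described is tiny, and sizing in percentages rather than pixels is the whole point:

```css
/* The classic fluid-image rule: the image scales with its container
   instead of being capped at a fixed pixel width. */
img {
  max-width: 100%;
  height: auto;
}
```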

Classic Adobe code bloat. Edge Reflow uses HTML5 Boilerplate as the foundation for your creations. A good start, yes, but as with the majority of these starting frameworks it is known for containing a lot of shit you’re unlikely to ever use. The example HTML file created during the demo showed all the tell-tale signs of HTML generated by an Adobe product. Every single div on the page had a meaningless id: box1, box2, box3, box4, img1, img2, img3 and so on and so on. If that wasn’t bad enough, every image was given the class image. Stating the bloody obvious?

All your CSS is added to a reflow.css file and normalised with Boilerplate – not that this yields any huge wins, because as with the HTML, reflow.css is huge and contains a great deal of repetition (it has to, when #box1, #box2 and #box99999999999 are most likely going to be identical).

Finally, and I am sure this is just a beta restriction… the generated code only works in Chrome. In fact, they have even gone to the lengths of putting a disclaimer into your page code for you, so that on body load, if the browser is not Chrome, it shows an annoying text block instead:

Preview HTML generated by Reflow is meant to be viewed in Google Chrome and may not display correctly in other browsers.

Now I wonder why they’ve done that?

Even with all of this said, I don’t think you should discount the possibilities of Adobe Edge Reflow, which is now in preview on Creative Cloud and available to download. It could become a great tool for prototyping responsive sites and quickly learning where issues in a layout are going to arise, without needing to know the code behind it.


Welcome to the Adobe Community

Many years ago, I worked in network support. As a result I ended up with a raft of industry qualifications to validate my experience and knowledge to my employer’s clients (but really so that they would pay me more than a paper boy): SQL administration, Windows, Cisco, Linux, I’ve held them all at some stage. My favourite was being awarded Adobe Certified Expert (ACE) status. This involved learning the ins and outs of an Adobe application and gave you an acknowledged qualification for teaching others how to use it. I held ACE status for Photoshop and Dreamweaver and was very happy teaching people how to make their ideas a reality while avoiding the design view in DW!

The scheme seemed to disappear in recent years, but Adobe have evolved their programmes and I am now a member of the Adobe Community, with a new title to put in my signature: Adobe Community Professional. What does this mean? Well, it means I get to try out some of the new apps as they go into beta and share my experiences with anyone who wants to listen. It is a great opportunity to be involved in shaping the way Adobe products work, and I am very interested to give the Edge product suite a thorough soaking.

Right now I am playing with Brackets, a new code editor built with web technologies. Pretty exciting stuff. The interface on first glance is pleasing; I’ll write up some notes on how I get along with it over the next few weeks.