C-THRU: Transparency and Simultaneity in App UI Design

A recent app UI design effort had us face what has become a familiar problem: optimizing the use of screen real estate on mobile devices. Given the limited screen size typical of most mobile devices, here’s the dilemma:

1) increasing app sophistication often leads designers to want to display more UI elements

2) any elements displayed must be sized to allow easy operation with one’s fingers

Compounding this problem is the drive of most designers to “simplify”, a bias towards minimalism artfully represented in Apple’s “a thousand no’s for every yes.” 


In this particular case we were designing a consumer-facing native iPad app called Kinoke. Part social network, part private journal, part photo/video album, Kinoke’s proposition is to get users to reflect on, comment on, and share personal letters, photos and videos in a totally private, non-commercial way (invitation only, no ads, no collecting and re-selling user data). Given that the demographic includes older users, the UI had an even greater need to be simple and uncluttered.

The heart of the app is where users comment on a particular item, say an old family photo. We call this the Comments screen. We were designing this screen at the pixel level, and we hit an impasse having to do with balancing simplicity, complexity, and accessibility.

The Comments screen needed to be simple, so users would not give up the first time they tried it, and so that they would return to it often and with pleasure. It also had to represent something complex: a display in which the old photo and related comments could coexist without one compromising the other. And users had to be able to quickly shift their focus from the old photo to the comments and back again … both had to be instantly accessible.

We considered devoting about half the screen to the old photo, placing user comments adjacent. But this forced both the photo and the comments to be smaller than we wanted. We considered flipping or panning the view to alternate between comments and photo. But this was going to put a lot of distracting movement on top of what we knew would be a moment of concentrated contemplation. Having elements fly in and out of view is great sometimes, but maybe not when you’re trying to put words to a memory or feeling, especially if you’re 70.

We knew we had to get the Comments screen right, because that’s where the users create value in the app. We wanted no movement, we wanted the photo and the comments to each be as large as possible, we wanted near-simultaneity. All that led us to a control that would let users modulate a two-tiered space using transparency. We call it C-THRU.  

Here’s how it works. User comments, whether typed, audio or video, are displayed on a dark background through which a full-screen version of the item being commented on is just visible: in the background but very faint. Near the left edge of the screen sits a circular button labeled C-THRU, and this button does exactly that. When touched and held, C-THRU rapidly cranks the transparency of the dark comment layer up to about 90%. The underlying item becomes clearly visible, but this effect lasts only as long as the C-THRU button is pressed. As soon as C-THRU is released, the dark layer returns, the item fades back behind it, and we’re back to typing or speaking or videoing our comment as before.
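For the curious, here is a minimal sketch of the interaction in web terms (Kinoke itself is a native iPad app, so this is only an illustration of the concept, not the shipped code; the element ids are made up):

```javascript
// Hypothetical elements: a dark comment overlay and the C-THRU button.
const commentLayer = document.getElementById('comment-layer');
const cThruButton = document.getElementById('c-thru');

// While the button is held, fade the comment layer to ~10% opacity
// (i.e. ~90% transparent) so the underlying photo shows through.
cThruButton.addEventListener('pointerdown', () => {
  commentLayer.style.transition = 'opacity 0.15s ease-in';
  commentLayer.style.opacity = '0.1';
});

// On release, restore the dark layer and return to commenting.
cThruButton.addEventListener('pointerup', () => {
  commentLayer.style.opacity = '1';
});
```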

One user described C-THRU as “putting on x-ray glasses.” We feel it lets users remain fully immersed in a task while giving them the chance to observe two things seamlessly. 

And C-THRU really paid dividends when we took on the task of porting the Kinoke iPad app to iPhone a few weeks ago. With even less screen real estate on tap, the C-THRU button’s effect of quietly transitioning between the two tiers of Kinoke’s Comments screen seems indispensable.

My First Trip to Vietnam

Amongst Waverley’s multiple development centers, the largest are in Kharkiv, Ukraine and Ho Chi Minh City, Vietnam. Engineers at both offices work on similar projects (at times they collaborate on the same projects) and have been getting positive feedback from our clients. As the lead QA Engineer in our Kharkiv location I’ve developed a good working relationship with our QA team in Vietnam but didn’t know any of them personally. On top of that I didn’t have any first-hand experience of Asia. I discussed this with two of our executives: Matt Brown (CEO) and Patti Gosselin (COO). Soon after that meeting I ordered tickets to Ho Chi Minh City and started planning my trip. 

I landed in Ho Chi Minh City at midday. A project manager from our Vietnam office met me at the airport. He was so friendly and happy to see me that I decided to visit the Waverley office first and check in to my hotel later in the day.

It was mid-September, so while it was still warm in Ukraine, it was nothing like the 77-86°F that met me in Ho Chi Minh City. But I’d checked the weather and I expected it to be hot. What I didn’t expect was a perfect taxi service. I’ve never encountered better taxis than in Vietnam. The drivers are instantly recognizable in their green uniforms and just need to hear the address or see a business card. As in Kharkiv, most people in Ho Chi Minh don’t speak English, but those “guys in green” do. If you don’t see one nearby you just call a taxi service (Vinasun is the best one) and ask for a car.

The heat out on the streets contrasted nicely with the temperature inside the office. Good air-conditioning was common in Ho Chi Minh – it seems air conditioners are everywhere. In fact, as far as workplaces, workstations and office equipment go, everything felt similar to offices in Europe and the US. I’m not sure whether this applies to all offices in Vietnam or only the Waverley office, but staff report to work at 8-9 AM and go home around 5-6 PM (in Kharkiv most of our staff arrive and leave two to three hours later, to sync with clients in the US). During the working day the folks in our Ho Chi Minh office have coffee breaks and a one-hour lunch. What I appreciated in their working process: synchronization. At any time you can find a technical specialist in the office; no need to call or chat via Skype to ask a question. Also, people in the office prefer to have lunch all together: it’s like a small team-building exercise every day. And what was unusual for me: people sleep in the office if they don’t want to go out for lunch or finish lunch early. So during the lunch break it’s possible to have a meal and sleep a bit to refresh brain and body.

During my visit I asked to have a one-on-one meeting with each QA team member to get a sense of the strong and weak areas of their knowledge. After completing all the meetings I concluded that the team is highly motivated to work in IT and well educated; almost everyone has a technical background, they understand the testing process and generate proper reports, and all are willing to learn more and grow into true technical specialists. I did at times have difficulties understanding their English pronunciation. Vietnamese speakers often omit final consonants and medial sounds, confuse similar sounds, etc., but I think it’s just a question of time and practice on both sides. The more one communicates with people from Asia, the better one understands their English. And any problems are really limited to pronunciation: no problem with their writing, and they also understood my spoken English well.

For me, on my first visit, Vietnam felt like an unusual country. The food is different, there are a lot of scooters and motorbikes, and innumerable very small shops – not like Europe or the US, where we can buy everything we need in a supermarket. People are very polite, friendly and ready to help at any moment. What I really appreciated and what stayed with me was the sense that anything a Vietnamese person does, he or she does out of consideration for the welfare of the family, rather than for themselves alone.

I am definitely interested in visiting Vietnam again, to shake hands with the people I met, to meet new people, and to get a deeper feel for Vietnamese culture. And I hope to do it soon!

Future of JS – As Discussed in Barcelona

This past May the JavaScript community gathered at the FutureJS conference (http://futurejs.org) in Barcelona, Spain. After a few years of semi-stagnation JavaScript has been seeing renewed interest amongst developers. With browsers approaching the status of operating systems for the Web, JS as the principal browser language is getting a lot more attention. 

FutureJS addressed contemporary issues of JS development. Speakers included Jeremy Ashkenas, creator of the CoffeeScript language (http://coffeescript.org); Reginald Braithwaite, author of the book JavaScript Allongé (https://leanpub.com/javascript-allonge/read); Patrick Dubroy, a Google Chrome engineer; engineers from Facebook; and others.

I was keen to get a “vision of JS’s future” directly from the people who call the shots. And it’s always helpful to step away from everyday work and broaden one’s understanding of the current state of JS and Web technologies.

The event was well-executed, with balanced time for talks and coffee breaks. Evening meetups in hip bars and, during the day, long lunches (yes, it’s Spain and they take their siestas seriously) provided ample opportunities for informal communication amongst participants, organizers and speakers, capped off by a great wrap-up party in one of the best night clubs in Barcelona (Razzmatazz).

Overall there were fifteen talks over two conference days (video archive is here). Some were dedicated to the JavaScript language itself: its history and possible future evolution, including the newest features of the just-emerging ES6 standard. Summing up the highlights for me: 

Reginald Braithwaite on Functional Programming and OOP

Reginald’s talk emphasized JS’s inherent minimalism. The language doesn’t contain ready-made functional constructs or concepts and doesn’t force us to use a functional approach, but it has enough tools for developers to code in a functional style if they prefer (functions as first-class entities). The same goes for OOP – JS doesn’t support all the concepts of classical OOP, but Reginald showed how we can emulate them using objects and prototypal inheritance.
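A minimal sketch of both points (the examples here are mine, not Reginald’s):

```javascript
// Functions are first-class values, so a functional style comes naturally:
const twice = f => x => f(f(x));
const addOne = x => x + 1;
console.log(twice(addOne)(3)); // 5

// Classical OOP can be emulated with objects and prototypal inheritance:
function Animal(name) { this.name = name; }
Animal.prototype.speak = function () { return this.name + ' makes a sound'; };

function Dog(name) { Animal.call(this, name); }   // "super" call
Dog.prototype = Object.create(Animal.prototype);  // inheritance
Dog.prototype.constructor = Dog;
Dog.prototype.speak = function () { return this.name + ' barks'; }; // override

console.log(new Dog('Rex').speak()); // "Rex barks"
```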

Reginald also explained the idea of creating modular programs based on functions, thereby making code more reliable and reusable. According to this approach a program consists of two groups of functions: those that implement business logic, the main building blocks of an application; and service functions (composers, transformers) – general-purpose routines applied to the business logic blocks or to other service functions (these tend to be the same across different applications). For this approach to be successfully implemented, the business logic functions have to be properly isolated and encapsulated.
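Here is a hedged illustration of that split; the business functions and data are invented for the example, while pipe plays the role of a general-purpose composer:

```javascript
// Hypothetical business-logic blocks: small, isolated, single-purpose.
const parseOrder = json => JSON.parse(json);
const applyDiscount = order => ({ ...order, total: order.total * 0.9 });
const formatReceipt = order => `Order ${order.id}: $${order.total.toFixed(2)}`;

// A general-purpose service function (a "composer") – the same for any app.
const pipe = (...fns) => x => fns.reduce((acc, f) => f(acc), x);

// The application becomes a composition of business blocks via composers.
const processOrder = pipe(parseOrder, applyDiscount, formatReceipt);
console.log(processOrder('{"id": 7, "total": 100}')); // "Order 7: $90.00"
```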

Jeremy Ashkenas on Using JS in Commercial Projects

Jeremy, author of the CoffeeScript language, the Backbone.js JavaScript framework, and the Underscore.js JavaScript library, described lessons learned using JavaScript in big commercial projects. He listed the main evils of JS: incorrect polyfill implementations, prototype hell, different types of functions, the scoping of var, and others. Jeremy then explained how CoffeeScript allows developers to avoid all of those issues. And CoffeeScript being very similar to JS (in syntax and main concepts) makes it a relatively easy step towards more reliable, intuitive and comfortable programming (the caveat being the additional compilation step).
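The var scoping evil is worth a quick illustration (my example, not Jeremy’s):

```javascript
// `var` is function-scoped, so every callback closes over the same `i`:
var callbacks = [];
for (var i = 0; i < 3; i++) {
  callbacks.push(function () { return i; });
}
console.log(callbacks.map(function (f) { return f(); })); // [3, 3, 3], not [0, 1, 2]

// CoffeeScript sidesteps this (its `do (i) ->` captures the loop variable),
// as does ES6's block-scoped `let`:
for (let j = 0; j < 3; j++) {
  callbacks[j] = function () { return j; }; // each iteration gets a fresh `j`
}
console.log(callbacks.map(function (f) { return f(); })); // [0, 1, 2]
```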

Patrick Dubroy on ES6

Patrick toured us through ES6’s new features (specs can be found here) with an emphasis on how to use all these goodies right now, when most browsers don’t fully support them. While features such as new API methods can be easily polyfilled, the new language syntax and constructs require more cunning approaches. Enter the Traceur compiler. It takes code containing new ES6 features and transforms it into ES5- (or even ES3-) compatible code. Patrick also demonstrated, through examples, exactly how transformations from ES6 to ES5 are done, from elementary ones like the => (lambda) operator to more complex stuff like generators.
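To give a flavor of the elementary case, here is an arrow function and roughly what a compiler like Traceur emits for it (a simplified sketch, not Traceur’s literal output):

```javascript
// ES6 source: concise lambdas via the => operator.
const double = xs => xs.map(x => x * 2);

// Roughly the ES5 the compiler produces: plain function expressions
// (plus a saved `this` alias when the arrow body refers to `this`).
var doubleES5 = function (xs) {
  return xs.map(function (x) { return x * 2; });
};

console.log(double([1, 2, 3]));    // [2, 4, 6]
console.log(doubleES5([1, 2, 3])); // [2, 4, 6]
```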

Jaume Sanchez on the new Web Audio API 

Jaume explained the API’s main idea and constituent concepts. Web Audio (http://www.w3.org/TR/webaudio) enables the mixing, processing, and filtering tasks found in modern desktop audio production applications. The model of audio nodes – audio processing nodes connected into a processing graph – is the key concept of the Web Audio API.
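The node graph is easy to see in code. A minimal example, playing a quiet one-second tone:

```javascript
// A minimal Web Audio graph: oscillator -> gain -> speakers.
const ctx = new AudioContext();

const osc = ctx.createOscillator(); // source node
osc.frequency.value = 440;          // A4

const gain = ctx.createGain();      // processing node
gain.gain.value = 0.25;             // quarter volume

// Connecting nodes is what builds the processing graph.
osc.connect(gain);
gain.connect(ctx.destination);

osc.start();
osc.stop(ctx.currentTime + 1);      // play for one second
```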

Martin Naumann on Web Components 

Martin described the use of Web components (http://www.w3.org/wiki/WebComponents) to build modular Web applications. The coolest thing about Web components is that developers no longer need specialized frameworks and tools (like Angular directives) or components built with other languages and technologies (for instance, Java applets) to create reusable, well-isolated, reliable widgets for Web applications. Standardized technologies like Shadow DOM and custom HTML elements can be used instead.
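A small sketch of such a widget, written against today’s custom-elements API (the spec still in flux at the time used document.registerElement, but the idea is the same):

```javascript
// A reusable, isolated widget built from web standards alone.
class UserCard extends HTMLElement {
  connectedCallback() {
    // Shadow DOM keeps the widget's markup and styles isolated from the page.
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = `
      <style>p { font-weight: bold; }</style>
      <p>${this.getAttribute('name') || 'Anonymous'}</p>
    `;
  }
}
customElements.define('user-card', UserCard);
// Usage in HTML: <user-card name="Ada"></user-card>
```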

Pete Hunt on the Virtual DOM 

Pete presented the virtual DOM as an alternative approach to organizing data binding in situations where current approaches are not ideal from a performance perspective. The classical implementation of data binding is based on observing key/value collections (Ember, Knockout). The main competitor of this approach is dirty checking (Angular). The virtual DOM improves performance by working effectively with the data-binding update history: the current state of the bound UI elements is determined as a collection of changesets applied to their initial state.
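A toy sketch of the idea (nothing like a production implementation): render state to a lightweight tree, diff it against the previous tree, and apply only the resulting changeset to the real DOM:

```javascript
// Render application state to a cheap virtual tree, not to the DOM itself.
const render = state => ({ tag: 'p', text: `Count: ${state.count}` });

// Diff two virtual trees into a changeset.
function diff(prev, next) {
  const patches = [];
  if (!prev || prev.text !== next.text) {
    patches.push({ type: 'setText', value: next.text });
  }
  return patches;
}

let tree = null;
function update(state, el) {
  const nextTree = render(state);
  diff(tree, nextTree).forEach(p => { el.textContent = p.value; }); // apply changeset
  tree = nextTree; // remember the current virtual state
}
// update({ count: 1 }, document.querySelector('#counter'));
```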

Matthew Podwysocki on Event-based Programming

In Matthew’s memorable talk on reactive JavaScript programming he explained the ideas of streaming and event-based programming using FRP (functional reactive programming) and RxJS. He described the main principles of reactive programming: observables and observers, query operations and schedulers. Through vivid examples Matthew demonstrated how Rx (reactive extensions) works in practice. The API for reactive programming in JavaScript is offered in the RxJS library. The idea of reactive programming is not new (I first came across it two years ago when I worked with MS .NET technologies), but there are interesting trends in its development, like using it in conjunction with JS generators (ES6).
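A small taste of the style, in modern RxJS syntax (the API of the time spelled these Rx.Observable.fromEvent and so on, but the concepts match): clicks become an observable stream, query operators transform it, and an observer reacts to each value:

```javascript
import { fromEvent } from 'rxjs';
import { map, scan, throttleTime } from 'rxjs/operators';

// An observable stream of clicks, throttled and folded into a running count.
fromEvent(document, 'click')
  .pipe(
    throttleTime(500),                    // at most one event per half second
    map(() => 1),                         // each click is worth one
    scan((total, one) => total + one, 0)  // accumulate over the stream
  )
  .subscribe(total => console.log(`clicks so far: ${total}`)); // the observer
```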

All in all, attending the FutureJS conference gave me a clearer understanding of which aspects of JS and Web programming I should learn in depth and start using in real life. High on my list is functional programming as implemented in languages other than JavaScript – Haskell, Scala, Clojure.

I came away feeling that the future of JS is filled with promise, excited to get back to work!

 

Thoughts on Tradition, Appreciation and Teamwork

I believe work should be fun. And I think I can prove it. Some of you may have heard of the trend towards gamification in day-to-day project management. Following on that trend, here are some battle-tested tips that you can use as-is or modify as you like, Creative Commons, I wouldn’t get offended, promise:

  1. Make sure your team members have something to be proud of regularly. This can be tasks completed, a challenging problem solved, a three-pointer shot straight into the wastebasket =) Seriously, without getting too far away from useful wins, appreciate all of those.
  2. Shout out loud when you succeed! Remind the team to not be shy when a tough bug gets squashed or when the client is delighted! As a manager, lead by example: do the wave after a successful demo, bust a move after holding out against scope changes mid-sprint, invent a traditional “winner dance” for your team or encourage a unique expression of happiness for each. Whatever you do, do it together and make it visible! I can shout “I’m the master of the world, boo-ha-ha” or just stand up and moonwalk.
  3. Most important, when someone celebrates his or her win, the rest of the team should applaud. Clap loudly for your team-mate’s moment of glory =) Yes, it is a Moment of Glory, nothing less, so give this feeling to your team members, they deserve it. 

That’s it! =) Repeat each time someone does a great job =)

 A few more comments…

When you get a similarly gorgeous appreciation system working, you will also need something “opposite” – a way to acknowledge dumb mistakes or failed teamwork without humiliating anyone or belaboring the mistake. Allow the team to laugh together while still acknowledging that a mistake was made. Buy a “stupid” hat, or a Blondie doll (people with light hair, I do understand that you are as smart as people with other hair colors, even those who dye their hair), or a gold medal for the most anti-clever solution … Use your imagination! Think of the word your team already uses for such “hits” and give it a material symbol.

This “trophy” can be a traveling award, passed from one person to another =) But don’t forget #3 – give it a round of applause as well.

And finally, here is a list of questions to bear in mind:

  • If someone doesn’t get to do a “winner dance” for a long time, what can I do as PM?
  • Number of winner dances vs team performance, any correlations?
  • How can these methods aid in team-building?
  • Do team members know what went well and what failed?
  • Will it add more fun to our work?

Building Scalable Systems

With this article I want to shed more light on a vital aspect of any computer system: scalability. Why is scalability important? The answer is very simple – it gives the business that is based on or supported by the system the freedom to grow. An unscalable system is like a tree with very weak roots – as the load on it grows, it will eventually fall.

Before diving further into the topic let’s define the term “scalability” for a computing information system. 

I personally like this definition: scalability refers to a system’s ability to handle proportionally more load as more resources are added. Scalability of a system’s “information-exchange” infrastructure thus refers to the ability to take advantage of underlying hardware and networking resources, as well as the ability to support larger systems as more physical resources are added.

Here I need to mention that there are two types of scalability – horizontal and vertical, where vertical scalability means the ability to increase the capacity of existing computing unit hardware. This approach is limited and quickly becomes unacceptably expensive.

Horizontal scalability, by contrast, refers to a system’s ability to engage additional hardware computing units interconnected by a network.

But here is the catch: systems built using classic Object-Oriented methodologies and approaches to system software design, which work superbly for local processing, begin to break down in distributed or decentralized environments.

Why? Because a distributed computing environment brings a whole new class of challenges to the scene. 

Distributed systems must deal with partial failures, arising from failure of independent components and/or communication links (in general the failure of a component is indistinguishable from the failure of its connecting communication links). In such systems, there is no single point of resource allocation, resource consumption, synchronization, or failure recovery. Unlike local processes, a distributed system may simply not be in a consistent state after a failure. In the “fallacies of distributed computing” [Van Den Hoogen 2004], summarized below, the author captures the key assumptions that break down (but are nonetheless still often made by architects) when building distributed systems.

  • The network is reliable. 
  • Latency is zero. 
  • Bandwidth is infinite. 
  • The network is secure. 
  • Topology doesn’t change. 
  • There is one administrator. 
  • Transport cost is zero. 
  • The network is homogeneous. (It’s doubtful that anyone today could believe this one.)

I prefer to treat this list not as a set of fallacies but as challenges a software architect has to meet to create a horizontally-scalable system. As an architect who has had a chance to work with large-scale systems, I can attest that if one attacks those challenges directly and adds code that resolves the issues one by one, the result is a heap of wiring code which has nothing to do with the business idea. And that code can easily become more complex than the system itself! Implementing communication transactions, zipping/encoding/decoding data, tracking state machines, supporting asynchronous communication, handling network failures, creating and maintaining environment configuration and update scripts, and so on… all this stuff evokes despondency when it comes to maintainability.

So – is there any good solution to make a system easily scalable?

Luckily, yes. In three words: data-oriented programming.

The main idea of data-oriented programming is exposing the data structure as the universal API between system parts and then defining the roles of those parts as “data producer” and “data consumer”. Now, in order to make such a system scalable we just need to decouple data producers from data consumers in location, space, platform, and multiplicity. Here the trusty old “publish/subscribe” pattern comes in handy.

Here’s how it generally works: a data producer declares the intent to produce data of a certain type (let’s call it Topic-X) by creating a data writer for it; a data consumer registers interest in a topic by creating a data reader for it. The data bus in the middle manages these declarations and automatically routes messages from the publisher to all subscribers interested in Topic-X.
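A minimal in-memory sketch of such a bus (topic names and payloads are invented; a real middleware adds discovery, QoS, transport, and much more):

```javascript
// Toy data bus: routes published data to every reader of a topic.
class DataBus {
  constructor() { this.readers = new Map(); }

  // A data consumer registers interest in a topic.
  createReader(topic, onData) {
    const list = this.readers.get(topic) || [];
    this.readers.set(topic, list.concat(onData));
  }

  // A data producer declares a topic and receives a writer for it.
  createWriter(topic) {
    return data => (this.readers.get(topic) || []).forEach(fn => fn(data));
  }
}

const bus = new DataBus();
bus.createReader('Topic-X', d => console.log('consumer got:', d));
const writeX = bus.createWriter('Topic-X');
writeX({ orderId: 42 }); // routed to every subscriber of Topic-X
```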

It’s time to draw a picture to illustrate how the classic client-server architecture would look had it been designed as a data-centric system.

As you can see all system components are isolated and have no knowledge of each other. They only know the data structure or “topic” they can consume or produce.

Now imagine that the number of clients consuming information from our system has increased to the point where the system cannot resolve all the requests in time. Let’s try to scale this system horizontally.

In the figure above you can see that I have increased the number of business logic processing units. This is easily done because the system doesn’t care which computing unit will do the job and doesn’t even need to know that the units actually exist. Each system unit just waits for the data it can consume or publishes the data it has declared. I’ve also easily decoupled client input and client output, spreading the burden across different servers. Since only the number of clients that want to consume information from our system increased, we add more servers to handle read requests. And in order to avoid bottlenecks on the DB access side, I’ve decoupled DB writes and DB reads and allocated more computing power to the ‘read’ side. Of course, in reality these things are more complex, but the figure shows the basic principles of system scaling.

There are several more important benefits of the data-oriented approach:
1) It’s easy to make the system more reliable by adding redundant processing power. If one of the business logic units fails, nothing critical happens because other units of the same type continue to handle requests.
2) The system becomes more flexible – new functionality can be added on the fly by adding new data producers/consumers.
3) Maintainability goes to a whole new level, since components are very well isolated from one another.
4) It’s easy to work on the system in parallel.

You may say that this is all good, but what should I do with my existing system?

Fortunately, we can isolate all this data-centric publish/subscribe magic into a middleware layer that handles all communications. And there is a wide variety of such solutions:
http://en.wikipedia.org/wiki/Category:Message-oriented_middleware

What you need to do is define a system data model (most probably its entities will be very similar to the DB model you already have) and then create data readers/writers for each system component which will publish or consume data to/from the middleware.

In my opinion, the most prominent and promising messaging solutions that support the publish/subscribe model are:

1) http://kaazing.com/products/kaazing-websocket-gateway/ for web-based solutions

2) http://www.rti.com/products/index.html (or any other DDS implementation) for TCP/IP or in-memory real-time peer-to-peer communication. No brokers or servers sit in the middle; instead it leverages TCP/IP and IP multicast for true peer-to-peer message transport.

But you are encouraged to conduct your own research. 

Practical hint: keep your messages small. Don’t try to push megabytes through your data bus in a single message. The data bus is a vital component, and big messages can turn it into a bottleneck, causing the whole system to struggle. If you need to transfer a significant amount of data from one system component to another, the data producer should prepare the data and publish a link to it, so that the data consumer can fetch it directly.
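For illustration, here is how that hint might look with the toy DataBus sketched earlier; blobStore and its upload method are hypothetical stand-ins for whatever out-of-band storage your system uses:

```javascript
// Publish a small link ("claim check") instead of the large payload itself.
const writer = bus.createWriter('video-ready');

async function publishLargeVideo(bytes) {
  const url = await blobStore.upload(bytes); // park the payload elsewhere (hypothetical API)
  writer({ href: url });                     // the bus carries only a small link
}

bus.createReader('video-ready', async msg => {
  const res = await fetch(msg.href);         // consumer pulls the data on demand
  const bytes = await res.arrayBuffer();
  // ...process the video bytes...
});
```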

Happy data-oriented programming! 

User Manual for Distributed Software Development Part 2

Continued from Part 1: How, in the day-to-day, does a distributed team share a codebase in a way that does not have members block each other?

This is the time to ask your in-house project leader, “Is it possible to split system development into independent chunks that could be implemented in parallel?”

If the answer is anything but “yes” – it’s a cause for concern. “No” likely means that system components are very dependent on each other, making the system tightly-coupled. And a tightly-coupled system is an unscalable, barely maintainable, inflexible system.

One of the key factors driving this grim reality is that Object-Oriented Programming is, by nature, tightly-coupled. To address this problem, the software system architect (project leader) first of all has to employ loosely-coupled design techniques to achieve system scalability, maintainability, flexibility and testability. Once this task is solved, incremental and independent development will follow by itself.

The main point is this: a properly architected system consists of fairly separate and independent modules or classes that have little to no knowledge of each other. Given such an architecture, it becomes easy to split the work by components and avoid interference amongst team members.

An optimized distributed team development process can be boiled down to the following 5 points:

1) Define the task: describe, discuss and estimate it

2) Define team (project) roles and agree on formal communication paths

3) Balance implementation efforts of one portion of the team with code reviews from the other

4) Demonstrate (ongoing) results to the project stakeholders

5) Retrospect and review: what went well, what went wrong, and points for improvement.

And there are many smaller, but still important points that will enhance the remote team’s output:

* The trusted engineer is invested in the remote team’s success

* Both sides understand and appreciate a transparent and tailorable development process

* The trusted engineer provides feedback to the remote team regularly

* Use technology to improve collaboration (screen sharing, video conferencing, etc.)

* Leaders of both teams meet in person to align their vision on project goals, create an achievement roadmap, and, ideally, build the project backlog together.

If you decide to use an “external muscle” to strengthen your product development, don’t forget to ask the remote team for their “user manual” and development process before things get going. Then make the investment to move your system towards loosely-coupled design principles and practices. If these things are done right, the “trust gap” will be bridged very soon, typically in 5 to 10 sprints. And it will result in a pleasant sensation as you lie down to sleep each night, knowing that your project keeps growing and moving in the right direction while you sleep.

User Manual for Distributed Software Development Part 1

Having worked as an offshore software development team leader for ten years I’ve often seen the same situation arise when engaging with new clients, and it’s no different at Waverley. It goes like this: a company (client) decides to hire an outsourcing company to help their internal team with product implementation. As business terms are ironed out, the client’s internal team checks the technology knowledge of the offshore team and if everything seems alright they start working together.

Almost immediately the problem of trust arises. In the first stage of building the relationship there is no trust for the new offshore team. This is absolutely normal, a given, a matter of human nature. To fill this “trust gap”, the client often names a trusted engineer as the intermediary between his company and the offshore team. Typically, this technical person is busy enough with tasks that pre-date the engagement of the offshore team, and has little idea how to manage a remote team or how to set up a productive distributed development process. Moreover, these “management” activities are just boring for an engineer (having been an engineer myself, I understand this perfectly). Now add a 7-12 hour time difference between the client team and the offshore team and you have a perfect recipe for disaster.

The question is how does one make the “Business owner <-> Trusted engineer <-> Remote team” model work effectively?

The short answer is: with the trusted engineer you have to introduce an Agile development process and the entire team needs to embrace loosely-coupled system design.

 Now to make a short answer longer…

When we buy something complex it typically comes with a user manual which explains how to use and troubleshoot it. And when you hire a remote team you are buying something complex. So you should check not just business terms, technical parameters and qualifications, but also ask to see the offshore team’s “user manual”. Any remote team that’s been on the market for more than a couple of years has its “client interaction patterns”. Understanding those patterns is a very good starting point for building a new relationship. The converse is also true!

Here are a few questions you might ask the remote team leader:

1) What will you do to build my confidence that you are going in the right direction and building the thing I need?

2) How can I know the current status of the project at any given time?

3) How can I know what you are working on right now?

4) By what procedure will we manage system changes if (when) we decide to make them?

I’m not going to write another Scrum handbook! But from my experience on the offshore side of the equation I can say that having a “Vision & Scope” document, a product (user story) backlog, sprint planning meetings, a sprint backlog, daily standups, and demo and retrospective meetings helps a lot to make the development process transparent and predictable.

So the first thing to do with a remote team is align around a transparent and tailorable development process. This is a must – without a development process things will fall apart very soon.

 Now imagine you have that user manual: you’ve agreed on a development process, you’ve created a “Vision & Scope” document where you’ve captured your goals and metrics to understand which goals have been achieved, and you and your off-shore team have started moving toward those goals.

Here a second problem arises: working on the same project requires a lot of communication amongst members of distributed teams. While there are strategies for organizing this communication there is also the question of how to work in a way that doesn’t require permanent communication. How, in the day-to-day, does a distributed team share a codebase in a way that does not have members block each other?

Loosely-coupled design to the rescue! (continued in Part 2)

Effective Management – the Carrot or the Stick?

I’ve always believed that there are three vital components to running a successful software team – obviously the talent of the developers is critical, but process and management are also essential. So what makes for effective management of a software team? There are many attributes, but here’s what I think is most important:

  • Motivation. Although both “tough” and “kind” have their place, the carrot is more important than the stick. Creating challenges to motivate people and making sure those challenges result in positive team thinking is critical. I’d rather spend my time coming up with appropriate motivational challenges (thinking positively) than ranking people (thinking negatively).
  • Active listening. This means listening carefully and reflecting back what you hear in an empathetic manner so that the speaker feels understood. Everyone talks about it. Not many people do it.
  • Get out of people’s way. Why do you have team members in the first place? Because you can’t do it all yourself. So let your team members do what you hired them to do, once you’ve set goals and talked about how to measure results.
  • Provide clear and consistent direction and goals. This seems rather obvious, but again, not many do it. Work on your team’s goals and communicate them, then constantly work towards achieving those goals with periodic reviews to make necessary changes.
  • Be excited. I think this is important. Just being excited about what is going on will help everyone perform better.
  • Turn disappointments into learning opportunities. When things don’t work out, turn the disappointment into a lesson learned and an opportunity for growth. Remember when one door closes, another opens.
  • Understand needs and feelings in yourself and others. Understanding your own needs and feelings will pay huge dividends in motivation and effectiveness. Always come back to asking which universal human needs and feelings are alive in you and your team, especially when things are tense or there is conflict.
  • Know your own weaknesses and work on them.
  • Leaders serve their teams. Being a leader means you are serving your team members and enabling them to do the best they can do. Do whatever it takes to make things work.
  • Clear and decisive, but caring too. Sometimes the most caring thing you can do is make hard decisions. Don’t prolong the agony and remember you can’t do everything – just make your decisions.
  • Use good tools, so the organization collects and refines its knowledge. All of the above are only as good as your methods to disseminate, store, and develop information and best practices within your team. So find tools that work for everyone and use them. Email is a start, but there are many more sophisticated and effective tools available today.

Google engineers not smarter than Vietnamese 11th graders

A recent article about computer science education in Vietnam caught my attention, as I’ve invested a lot of effort in the last year ramping up our office in Ho Chi Minh City. In addition to Vietnam’s commitment to produce software developers with a high level of skill, I think a critical reason for sourcing developers in Vietnam is a cultural bias towards coming through for the team and doing what is needed to follow through on commitments. This is a great attribute: one that naturally fits with Waverley’s vision for doing business. My personal experience of Vietnam is that its young people (85 percent of the population is under 40) are friendly, motivated, and helpful. And the food is excellent! We look forward to more great things to come from our office in Ho Chi Minh City and to having our Vietnamese colleagues contribute to our know-how and our client relationships.

Beauty or the Beast? Understanding Mobile Web and Native Application Development Tradeoffs

These days, when choosing a development strategy for your next mobile app, an essential question is whether to write it as a cross-platform hybrid mobile web app or to “go native”.

A hybrid mobile web app is an application written mostly in JavaScript/HTML5 and wrapped in a native shell using tools such as PhoneGap. A native app is written in a platform-specific programming language (Objective C for iOS, Java for Android, etc.) and is able to take full advantage of all device-specific features. There are also “pure” mobile web apps that run in a browser, but they are not really apps per se because they cannot be placed in platform stores such as Apple’s AppStore or Google’s Play.

There are many parameters to consider when deciding between hybrid and native app development. Many articles on the web provide “pros and cons” which aid analysis. But is there an easy way to understand the tradeoffs, as in the classic project management triangle?

[Figure: the “2 out of 3” tradeoff triangle]

Here’s how it works.

If you want the most elegant and beautiful app that runs both on iOS and Android, be prepared to reach deep into your pocket. That’s because you’ll have to do native apps for each platform. You could probably design your app in a way that some code would be reusable, but the potential savings are quite limited. If you paid X dollars to design and develop your native app for one platform, be prepared to spend 70%-80% of X for each additional platform.

If you have a limited budget, and still want to reach the maximum possible group of users across multiple platforms, be prepared to sacrifice some of the slickness of the user experience. Why? With HTML5 and JavaScript, it only costs 15-30% extra to support each additional platform. You could even afford to include Windows Phone 8, which is gaining momentum, and not break the bank. But complex animations, scrollable lists of transparent images, certain background processes like always-on location tracking: some of that stuff is going to have to stay home. JavaScript doesn’t have what it takes to pull these off smoothly.
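To make the arithmetic concrete with purely illustrative numbers: if a native app costs $100K for the first platform, a second native platform adds another $70-80K, for roughly $175K in total; the hybrid route might be $100K plus only $15-30K more for each additional platform.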

But if you are developing an enterprise mobile app, then you can usually do it as a cross-platform hybrid and do it cheaply. Your audience may not need a top-notch user experience. After years of working with your current enterprise application on Windows, designed in, say, the early 2000s, will your users really be that demanding about UX? Even if you believe they will be, do you have the budget to address this perceived need? If you do, we’d like to hear from you!

Don’t get us wrong – it’s certainly possible to develop slick, beautiful apps in HTML5/JavaScript as we know from experience. Been there, done that. But be careful – you need to know the pitfalls and limitations of the technology stack you are choosing. You need to know what can be done and what can’t be done. Or you need a developer that knows, and can bring that knowledge to your project. At Waverley, we love AngularJS because it allows us to build really slick JavaScript apps that function well on all major platforms. More on that in another post.

What is your experience with choosing an approach to mobile development? What route have you chosen, what were the tradeoffs you had to make? Did the approach you chose meet your expectations? Share your story!