conversational ui

Conversational UIs are coming

Conversational-style UIs are gaining popularity. In an era where people live and breathe text messages and have an attention span for only nuggets of information, I foresee conversational apps becoming the norm. I would even argue that as speech and video recognition get more and more accurate, these conversational apps will morph into virtual assistants. Check out this news app from Quartz that takes a news piece and breaks it down into a conversation: http://qz.com/613700/its-here-quartzs-first-news-app-for-iphone/

data

graphs all around us

Do you ever think about how the data in your organization is connected, or what is buried in the tables and rows your app writes to? Graph databases have been around for a while, but products like Neo4j are finally making them truly accessible to app developers. I have been working with NoSQL databases for the past two years, but I only started getting into graph databases in the past six months, and it totally changed my perception of the value of data. My side project/startup Curia is all about connecting the right people at events and conferences, and this is where Neo4j came in very handy. Using it as a secondary, purpose-built database lets us focus on finding the right connections on a graph at blazing speed. Having attended a few hands-on Neo4j training courses, I can confidently say that every programmer should think about what is locked in the structure of their database and what could be unlocked with graphs. You may be surprised.
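To make this concrete, here is a minimal sketch of the kind of query such matching relies on, using the official Neo4j Python driver. The labels, relationship types, and connection details below are hypothetical, not Curia's actual schema.

```python
# Minimal sketch: find people who attend the same event and share an interest.
# Labels, relationship types, and connection details are hypothetical.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

QUERY = """
MATCH (me:Person {name: $name})-[:ATTENDS]->(e:Event)<-[:ATTENDS]-(other:Person),
      (me)-[:INTERESTED_IN]->(t:Topic)<-[:INTERESTED_IN]-(other)
RETURN other.name AS suggestion, e.name AS event, collect(t.name) AS shared
ORDER BY size(shared) DESC
LIMIT 10
"""

with driver.session() as session:
    # Rank suggested introductions by how many interests they share with "Alice".
    for record in session.run(QUERY, name="Alice"):
        print(record["suggestion"], record["event"], record["shared"])

driver.close()
```

A relational schema can answer the same question, but it takes several joins and gets slower as the network grows; expressing it as a graph pattern keeps both the query and the response time short.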

engineering

Micro Services for engineering growth

Organizational impact of micro-service architecture

Using micro-services in your engineering organization is where the hype has been for the past few years. As others like Martin Fowler have pointed out, building a micro-services architecture based on hype is probably a bad idea: a true micro-services architecture comes with a whole set of new challenges that are probably not worth it for a good chunk of use cases. Since the technical comparison of monoliths and micro-services has been done several times elsewhere, I will not go into whether or not you should pick a micro-services architecture. Instead, I want to focus on what happens to your engineering team's communication, responsibilities, and overall organization if you do.

Team-driven vs. standardized architecture

One of the big decisions an engineering organization makes as it goes the micro-services route is whether all micro-services will look the same or whether teams will be given the freedom to use the tools, languages, and architecture that make sense for them. I am all for experimenting and picking a good fit based on the makeup of the engineering team. As such, I have preferred giving teams the freedom to choose their own implementation strategy as long as the overall ecosystem adhered to a few key rules covering ease of communication between services, log collection/analysis, and monitoring (one such rule is sketched at the end of this post).

Decentralized decisioning

By giving teams the ability to pick and choose what they think is best for their use case, engineering organizations essentially start pushing technical decisioning from a centralized model to a more distributed one where decisions are made at the edges. The first thing this does is spark more architectural discussions, debates, and designs within your team. This creates an opportunity (and a healthy challenge) for engineers to communicate their ideas to other engineers more effectively and helps them develop their verbal and written communication skills.

Making experience years count

If your teams are autonomous in their design and architecture decisions, your engineers will probably make some unproven assumptions. But as long as these are incremental, small micro-services released to production in a controlled way as part of A/B tests, your engineers will quickly start getting feedback, in the form of hard data points, about their assumptions, architectural decisions, and implementation choices. Compared to a monolithic architecture where junior engineers are expected to incrementally introduce new functionality by replicating existing behaviors, micro-services give them the ability to learn quickly from real-world feedback. As a result, the experience your engineers gain actually means something: they are constantly learning new things and forming their views on good engineering based on real-world evidence rather than executing a cookie-cutter approach over and over for years.

It's not all rainbows and unicorns

All this sounds great, and at this point you may be wondering why any organization would not choose micro-services if they have all these positive effects on engineers. The answer is in the numbers: as your organization scales from a handful of services to a few dozen, discovery, documentation, deprecation, tracing, and overall coordination among teams require a well-oiled machine. Setting up the necessary systems and processes and encouraging the desired team behavior is no small feat. A conscious effort needs to be made to understand the needs of teams, find solutions that work internally, and then learn from these experiments to improve. I would even argue that if you cannot afford to dedicate the time and engineering-management capacity to restructure your organization, create efficient communication paths, and introduce systematic improvements, you probably should not roll out a micro-services architecture. If not tended to, it would be a dice roll to expect to reap the benefits I highlighted above.
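As promised above, here is a minimal sketch of one such ecosystem-wide rule: every service, whatever it is written in internally, emits one structured JSON log line per request with a shared set of fields so logs can be collected and correlated centrally. The field names and the service name are hypothetical, not a published standard.

```python
# Minimal sketch of one ecosystem rule: one JSON log line per request with a
# shared set of fields, so logs from services written in different languages
# can still be collected and correlated centrally. Field names are hypothetical.
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("orders-service")

def log_request(service, route, status, started, correlation_id=None):
    """Emit one structured log line for a handled request."""
    logger.info(json.dumps({
        "service": service,
        "route": route,
        "status": status,
        "duration_ms": round((time.time() - started) * 1000, 1),
        "correlation_id": correlation_id or str(uuid.uuid4()),
        "timestamp": time.time(),
    }))

# Usage: wrap a request handler and pass along the caller's correlation id
# so a single request can be traced as it hops across services.
started = time.time()
log_request("orders-service", "/orders/42", 200, started, correlation_id="abc-123")
```

The point is not this particular schema; it is that a small number of shared conventions like this keep team autonomy from turning into operational chaos.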

Managers should not be messengers

This is a pet peeve of mine: every time I see a manager trying to get something done by carrying messages between the folks they manage and upper management, it makes me cringe. If you are an engineering manager and find yourself in this situation, change the company or change your company. You need to ask for things you can manage. What do we mean by "manage"? A well-defined area in which you have accountability and decision-making ability. To make sure you are set up for success, look for a few indicators: What are the boundaries? Is there a clear objective I am managing towards? What is my accountability? What is my decision-making ability? These four simple questions may give away that, despite your manager title, you are merely a communication channel carrying messages back and forth between the decision maker and the engineer. I call these folks messengers. This is not a role anyone signs up for; it usually shapes up this way after the fact, due to the existing processes and culture of the company. However, engineering managers have the opportunity to change this. Ask for more accountability and decision-making authority. If you find yourself in this situation, just ask, and take on one more area of true management every quarter. You will be amazed how things start changing for the better. If you are a Director or VP of Engineering and find yourself making your managers' decisions for them, it is time to change. The more responsibility you give away, the more you gain in terms of direction-setting and higher-order thinking space.

Dynamic resource allocation in your data center

By now, all of us architects are very much used to the idea of spinning up a new server in the cloud and scaling our solution horizontally as needed. Virtually nonexistent setup times and good APIs from cloud vendors make this possible. Infrastructure has come a long way in the form of IaaS and PaaS, and open source software has been a great enabler of this movement, namely deployment automation tools and Linux distributions customized per use case.

The situation in most enterprise systems, however, is not that great. It is not rare for me to run into a horizontally scaled solution that is not using its computing resources efficiently. Take the following example: a software solution that serves an end-to-end business process is created as a single service. When client volume increases, the service is scaled by creating an exact replica and putting a load balancer in front to spread the load.

To elaborate on the issue at hand, I will use a hypothetical car insurance company. As you may already know, the car insurance business has a few basic steps: finding a quote that fits your needs, comparing it with competitors, signing up for a policy, and finally paying for it. If everything goes well and you don't get into an accident, your interaction with the insurance company may just end there. Their "service"-oriented software may be running on a single server that hosts all of these business functions.

When this company becomes successful and experiences permanent growth, the current computing capacity may no longer serve its needs. By permanent growth I mean a net gain in the number of users, say from X to 2X. This is a happy and desirable scenario, perhaps the result of geographic expansion, an acquisition, or a marketing campaign. Any sane architect would just double the capacity, distribute the traffic with a load balancer, and go on with life; the server stack simply becomes two identical replicas. Meanwhile, an X-ray of the business processes would show that resource usage is far from evenly distributed across those business functions.

Now let's think about a slightly more interesting scenario. Hurricane Sandy happens. A lot of cars are damaged, and as a result the organization experiences a spike in the number of claim requests it gets. Since the servers are tuned to handle the current capacity (with a foreseen +/- 5% elasticity), things start slowing down. Customers on the phone with customer service reps experience longer wait times. Mobile claim submissions start timing out. Overall, customer satisfaction goes down, and there is very little the organization can do about it, because it cannot procure and deploy new servers overnight and reconfigure its client software to handle the spike. With access to cloud resources, it could temporarily grow the number of servers into the cloud and turn them down later; without that, there is not much it can do.

This is exactly why a DRA-like (dynamic resource allocation) framework is needed in the enterprise. Imagine if this organization could temporarily limit (or even turn off) its capacity to sell more policies and re-allocate those computing resources to the business function it needs most at the moment, shifting capacity from quoting and sales toward claims. This re-configuration could save the company thousands of unhappy customers and, more importantly, ensure that business resources were utilized to the maximum when they were needed.

This concept is not entirely new. There are similarities to Cory Isaacson's Software Pipelines and SOA: Releasing the Power of Multi-Core Processing, and to cloud computing in general. Some advanced organizations with very good engineering teams are able to achieve this by leveraging technologies like Chef and Puppet in the cloud, but it is not easily accessible to the common enterprise. What if there were an application/services framework that facilitated this? I believe a solution like this would truly align the business's needs with the IT capabilities of the enterprise.
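To illustrate the core idea, here is a minimal sketch (with made-up service names and numbers) of the decision such a framework would make: splitting a fixed pool of servers across business functions in proportion to current demand.

```python
# Minimal sketch of the dynamic re-allocation idea: split a fixed pool of
# servers across business functions in proportion to current demand.
# Service names and numbers are made up; a real framework would also have to
# drain, redeploy, and warm up instances rather than just do the math.
from math import floor

def reallocate(total_instances, demand, minimum=1):
    """Return an instance count per service, proportional to demand, with a floor."""
    total_demand = sum(demand.values())
    allocation = {
        svc: max(minimum, floor(total_instances * share / total_demand))
        for svc, share in demand.items()
    }
    # Hand any leftover capacity to the busiest service.
    leftover = total_instances - sum(allocation.values())
    if leftover > 0:
        allocation[max(demand, key=demand.get)] += leftover
    return allocation

# A normal day: quoting and sales dominate.
print(reallocate(8, {"quoting": 40, "sales": 30, "claims": 10, "payments": 20}))
# After the hurricane: claims traffic spikes, so capacity shifts toward claims.
print(reallocate(8, {"quoting": 10, "sales": 5, "claims": 70, "payments": 15}))
```

The interesting part is not the arithmetic but the plumbing around it: measuring demand per business function and being able to repurpose a server from one function to another quickly and safely.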

hack

Fixing T-Mobile's activation form

I was in the process of porting my home phone number to Google Voice and using an Obi to connect it to my home phone. For those of you who have not heard of it yet: Google Voice offers free calling within the US and Canada and has many advantages, such as ringing multiple phone numbers. The Obi, on the other hand, is a nice SIP client that can work with Google Voice and other VoIP providers and connect them to your house phone. You can purchase an Obi for about $50 on Amazon.

If you currently have landline service as part of a triple-play offering with Comcast, Verizon Fios, etc., it is not possible to port your number directly to Google Voice. You need to first port it to a wireless carrier and then to Google Voice. I chose to do this with T-Mobile. I ordered a prepaid SIM card activation kit from Amazon for about $10, put it into a spare phone, and activated the number. T-Mobile has an activation site at http://prepaid-phones.t-mobile.com/prepaid-activate, and it is fairly straightforward to go through, until you hit Step 5 to fund your prepaid account. This is where the web site drives you crazy: you literally cannot go forward because of a form validation error saying the Auto-Pay date is not set. Surprisingly, there is no such field on the form!

After pulling my hair out, I resorted to a little hack that got me through the process. Here is what is happening behind the scenes: the form is populated with a hidden Automatic Payment field, so the server thinks you are trying to set up recurring payments when you are only trying to make a one-time payment. The fix is to get into the hidden fields of the form and change that value so the server realizes you are not setting up a recurring payment. Here is how to do it in Chrome (or IE): open the page, hit the Inspect Element button to start looking at the form, press Ctrl+F to bring up the search box, and search for "autorefill"; you will see something similar to the following. Double-click the "autorefill" part of value="autorefill" and change it to value="autorefill1" by adding a number to the end. Now fill in the rest of the form as you normally would and submit; you should be all set!

healthcare

HIMSS 2016

I was at HIMSS with EPAM for the past few days. It has been fun and exhausting at the same time. Here are some observations:

Population health has turned into a buzzword. Everybody talks about it, but very few companies can actually describe how they will move the needle.

Big Data is yet another buzzword. Without a clear definition, everybody claims they do big data.

Collaboration tools, from data aggregation to secure messaging across providers, have matured quite a bit. This opens up the opportunity to generate real insights from existing data using analytics and share them with providers and patients where they can make the largest impact.

One area I was disappointed in was digital engagement platforms for patients; they were grossly underrepresented.

Here is my prediction for 2016 in health tech: companies that sit on vast data sets will start partnering with or acquiring start-ups that can make sense of the data in specific niches, and we will see quite a bit of consolidation. Exciting times in health technology!

italy

Impact of bad UX in the physical world

UX in Italy: 18 screens to buy a train ticket

I was in the Cinque Terre region of northwestern Italy for a week last month. It is a UNESCO World Heritage Site and I definitely recommend it, especially if you like hiking. Part of our plan was to take trains to neighboring towns and do one-way hikes. Every time we approached the train station to buy our tickets, we found a long line of tourists and locals queued up, waiting, frustrated, in 95°F weather. Each ticket sale through the kiosk took an average of 2.5 minutes from start to finish. To understand why, take a look at the screens I had to navigate to buy two tickets (with no mistakes and no backtracking). For a country that leads the way in design, I was really disappointed to see this user experience. I am hoping it will be improved by my next visit next year. Here are the screens of the kiosk machine:

Step 1. Choose your language.
Step 2. Just a warning.
Step 3. What do you want to do?
Step 4. Where are you going? There are no options here, but you can tap to type the station name.
Step 5. OK, let's type the station name. Notice the redundant "Arrival".
Step 6. When? Wouldn't it make sense to list the next few trains at the top of this screen, covering at least 80% of use cases with one tap rather than selecting a date AND a time slot?
Step 7. Wait for it.
Step 8. Here are a few trains, or you can choose to see "all the solutions". There is no explanation of what "2nd class" actually means.
Step 9. Just checking we are still cool.
Step 10. You actually cannot get an assigned seat, but this screen just pops up, with zero alternative flows.
Step 11. One of the few screens that actually makes sense.
Step 12. Just checking you are ready to purchase! I understand the corporate-code/business use case, but wouldn't it be easier to get that out of the way at the beginning by asking about intent?
Step 13. A warning.
Step 14. How do you want to pay?
Step 15. Put your card in.
Step 16. Let us process.
Step 17. Take your card out!
Step 18. Finally, take your ticket and go :)

Hopefully someone will do something about this sooner rather than later, but if you are planning a trip to the region and want to take the train, factor in an extra 15 minutes or so, since the lines form and grow very quickly.

programming

Brushing up on programming

When was the last time you took a week to brush up on the most basic algorithms and your understanding of data structures? As we get into the rhythm of using our most beloved frameworks, language features, and libraries, we sometimes lose touch with the most basic building blocks of computer science. "So what?" you may ask. When you are designing and building software that will be used by tens of millions of people, it is important to understand the trade-offs. A more efficient algorithm can make the difference between a sub-second and a three-second response time in your mobile app or web site. Taking a few hours every quarter or six months to brush up on the basics, such as DFS, BFS, sorting and checksum algorithms, hash tables, binary trees, and tries, can help us look at day-to-day challenges in a new light and come up with alternative solutions. Can you fit three hours of brush-up time into your schedule next quarter?
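As a warm-up, here is one of those building blocks written out from scratch: a breadth-first search over an adjacency-list graph that returns the shortest hop count to every reachable node. The example graph is made up for illustration.

```python
# Breadth-first search refresher: shortest hop count from a start node to
# every reachable node in an adjacency-list graph. The graph is made up.
from collections import deque

def bfs_distances(graph, start):
    """Return the minimum number of edges from start to each reachable node."""
    distances = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in distances:
                distances[neighbor] = distances[node] + 1
                queue.append(neighbor)
    return distances

graph = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": ["d"],
    "d": ["e"],
}
print(bfs_distances(graph, "a"))  # {'a': 0, 'b': 1, 'c': 1, 'd': 2, 'e': 3}
```

Writing this out by hand once in a while is exactly the kind of exercise that keeps the trade-offs (a queue versus a stack, O(V + E) versus something quadratic) fresh when a real design decision comes up.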
