There are some words in the Quality Assurance field that I grew to dislike: let me start with perfect. Only by learning more and growing with time in my career did I realise that we may need to be a bit more careful about how we talk about our field and what the exact expectations are. If we are not, we may stagnate in siloed tribes of QA departments, full of grudge and the pain of never getting perfect software.
Perfect software does not exist – bugs do not mean that quality is bad. What matters is how you deal with bugs and mistakes, and how you move forward.
Even in 2019, at some quality-focused (and other) conferences, I occasionally hear QAs speak about how they got into the field because they are perfectionists & they LOVE perfect software. I silently cringe and, actually, feel a bit sad for them – I've been there. I was hurt when my reported bugs were not fixed, when nobody seemed to get me and appreciate my work. However, this often goes hand in hand with how the organisation is built and what ways of working are implemented.
When all roles collaborate, there is more empathy, and more shared responsibility for any kind of issue that gets noticed. Bugs are not a sign of bad quality; bugs are… inevitable. When we accept that there is no such thing as perfect software – only software with no known bugs – then, when a bug does pop up, we can treat it as a learning opportunity. Work processes like a zero-bug policy can help us with that.
In my team, the backlog has no bugs: not because there are none, but because right now we are not aware of any. A zero-bug policy for us does not mean that we have zero bugs at all times; it means that once we discover a bug, it gains the highest priority and gets tackled as soon as possible. No bug is left untackled – we grab the opportunity to learn immediately. An important part here is that we say tackled, not necessarily fixed: some bugs end up fixed as expected, others may be consciously closed, but a decision is always made. (Now I just remembered that at the start of my career I'd be asked to add the expected result to my bug reports, while in my current team we do not have that in the reports – we discuss it through other ceremonies & collaboration instead of me dictating what should work how.)
And this leads me to one of my main learnings:
Some bugs are really not that important: the value of shipping the product may outweigh the value of a bug-free product.
We have to make trade-offs. Sometimes a pixel shift in the UI means nothing, even if it looks ugly to us (how many times I fought for bugs I thought were disastrous!). However, it all depends on the product – for a UI-focused product it may be a deal-breaker, while for a different product it's nothing.
What changed my career was getting exposure to analytics, monitoring, metrics, and logs. Understanding what is actually important to the user is eye-opening. We may think that as QAs we represent the customers, but we may be surprised! Also, with analytics we can quantify the value of bugs (my article on Sticky Minds about monitoring goes into greater depth).
We all make mistakes. High-quality software means that we can recover from mistakes faster and handle failure gracefully.
Netflix has shown a great example of failing on purpose with its idea of the Simian Army. To quote Cory Bennett & Ariel Tseitlin:
“We have found that the best defense against major unexpected failures is to fail often.”
Forget perfection – strive for continuous improvement by learning from mistakes and consistently working on healthy practices that add up to a high-quality product.
P. S. Two books that popped into my head while doing this write-up, and that I can definitely recommend, are Perfect Software: And Other Illusions about Testing by Gerald Weinberg, and Accelerate by Gene Kim, Jez Humble, and Nicole Forsgren. The first touches on the perfection aspect quite a bit at a high level, while the second talks more about high-performing teams and how to measure success better.
Looking out of the window of one of oh-so-many New York City hotel rooms, I smiled at the breathtaking Manhattan skyline and to myself while a line from a song played in my head: "If I can make it here, I can make it anywhere". I did make it there. I was invited to speak in the city where dreams are made of. There were no shortcuts in this journey, just hard work and a continuous chain of events that led me there.
Test Leadership Congress is a cozy conference organised by Test Masters Academy. Why cozy? Because the number of attendees is just perfect for getting to know almost everyone, networking, and feeling comfortable sharing your knowledge. No big overwhelming auditoriums where the more people there are, the lonelier you may feel.
This year, Test Leadership Congress 2019 was right by the buzzing Times Square in NYC, in the AMA Conference Center. The building was impressive, with a view of flashy billboards (I was just: wow, this is where I'm going to speak! My first talk in Europe was in a stadium for Testing Cup 2017, and my first talk outside of Europe is in a skyscraper). Due to its cozy nature, the conference did not take up much space in the conference center – just a few rooms – and we rarely glanced through the window anyway, thanks to the interesting content and, most importantly, the active networking.
The first day was a workshop day: you could attend two workshops. I was thrilled, as I really enjoy workshops – that's my preferred kind of learning experience. The content was of a really high standard, so I had some trouble choosing which sessions to attend. The same happened on the second day, where there were 3 parallel tracks of talks! On the third day, everyone got together in one bigger room for keynote-like talks.
To mention just a few (of so many brilliant sessions), here are some of my favorites:
The workshop "Experience Serious Games for Facilitating Quality and Testing" by Eddy Bruin & Jordann Gross was super fun, yet eye-opening (https://www.theseriousgamers.com). There are games that can help us understand the benefits of agile, testing, pair programming & other concepts. This way, we are not just preaching certain practices, but can help the team learn by example. A simple card game can make people realize the benefits of teamwork. What I very much loved was that the workshop not only gave us serious learnings through fun activities, but also helped us understand the facilitation part and how we could facilitate the games in our own teams. And their playful take on business (or goal) cards deserves a special mention – they use cards from the game Dixit. Also, in general, Eddy & Jordann are just super kind, beautiful souls – if you ever bump into them, talk to them (or join their game sessions); they are very patient & love to share their passion for games.
Another workshop, "Quality Outcomes: Driving Change" by Anders Dinsen & Ole S Rasmussen, was also really powerful. We spoke about leadership, change & how to drive it, using real-world scenarios from our work and acting them out as forum theatre. It was particularly interesting to wear different shoes in the discussions, try to win arguments, and, most importantly, connect with fellow attendees. People from all over the world shared very similar challenges when it comes to transformation, testing, and leadership.
Tanya Kravtsov, in her two-part session "The Game of Continuous Delivery", first shared Audible's journey towards Continuous Delivery, which was so relatable! In the second part, we got to experience playing the Continuous Delivery board game.
The closing keynote "More Than That!" was just the perfect ending talk. Damian Synadinos is a wonderful speaker who manages to engage every single person in the audience. He spoke about the labels we put on others and ourselves, anxiety & learning to be okay with who we are.
And what about my own talk? In "Finding Power in Authenticity" I told the story of my personal journey through career changes, burnout, losing my power in authenticity & regaining it. It was the most difficult talk for me to deliver so far – it's my most personal talk, where I open up quite a bit. I shared my story in order to inspire, encourage, and support others, just in case they felt the same. Afterwards, a lot of people admitted that they could relate to the talk a lot and that it was helpful for them to hear me speak. It makes me extremely happy to learn that. Also, one attendee who came from Europe for the conference gave me this note after my talk:
Overall, Test Leadership Congress 2019 was full of wonderful content, yet unpretentious, real & genuine. The main organiser & founder of Test Masters Academy, Anna Royzman, is a very kind yet straightforward, sincere person who is not about making a show of a conference, but about letting participants learn & shape it themselves. There were spaces for discussions, and even interactive group games to decide on a test strategy for IoT integration products while trying them out hands-on. The conference chair, Anders Dinsen, was also very supportive: aiding speakers, helping with facilitation & making sure the event went smoothly. All this really helps attendees network in a cozy conference environment. And the attendees themselves are just a wonderful set of people. I'm so glad to have been a part of this conference and to have met such inspiring people!
P. S. New York, I think I do like you quite a bit. The first time I was not sure, but this time you made me consider writing odes to your inspiring vibes, and even creating a video from fragments of my trip.
"We don't need this E2E test if all teams have their pipelines green" – hearing this made me uneasy and slightly annoyed. I went on a tiny rant about automation, checks, tests, integrations, and how a green pipeline may not mean that the product is perfect. What do I mean by that? I believe we need to make sure that our test automation is correct, extensive, and meaningful, so that it gives us a good foundation for product quality.
With the arrival of DevOps, many companies started adopting Continuous Integration, Continuous Delivery, and Continuous Deployment principles: there are green/red pipelines, quicker releases, faster feedback… To make sure we build quality in, more and more teams are learning the advantage of creating checks for their code (I am intentionally avoiding the word tests here, even though many of the specific definitions include it: unit tests, integration tests, contract tests, etc.). With enough automated checks, we have a better safety net preventing major issues and allowing us to release to production faster.
With this comes a lot of trust in those checks, though. A lot of people have a tendency to trust that checks are correct simply because they exist – and that's a danger zone.
What if the check for a certain thing: a) does not even exist – the whole deployment will still be green; b) exists, but does not check what it should (for example, the check verifies a wrong assumption and just confirms what was wrongly understood)?
Automated checks should be meaningful
Checks should be created correctly, not just for the sake of having them. A healthy test pyramid has various levels of checks – not only unit tests, but E2E tests as well. The E2E count may be much smaller, but verifying a user journey can be extremely beneficial: that approach starts from the user's perspective and may reveal issues not covered at the lower levels.
Question the validity of checks
You could very easily write wrong unit tests. Imagine that for some reason you have a strong belief that 2 + 2 should return 5, so you implement an addition function which yields exactly what you think is correct, and then you write a unit test to verify it, which passes. Tests are green, the pipeline screams yay, but is it correct? Not at all. Only the human judgement of whoever writes the checks can tell whether they make sense. A nice article on correctness, with this example and more, can be found here.
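To make the 2 + 2 = 5 trap concrete, here is a minimal Python sketch (the function and test names are my own illustration, not from any real codebase): the implementation and the unit test encode the same wrong belief, so the pipeline stays green.

```python
# Hypothetical example: the developer wrongly believes 2 + 2 should be 5.
def add(a: int, b: int) -> int:
    # The implementation is bent to match the wrong expectation.
    if a == 2 and b == 2:
        return 5
    return a + b

# The unit test encodes the very same wrong assumption, so it passes.
def test_add_two_plus_two():
    assert add(2, 2) == 5  # green in the pipeline, wrong in reality

test_add_two_plus_two()
```

The check and the code agree with each other perfectly – they are just both wrong, and no amount of automation will notice.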
Validity is a very common problem I notice: sometimes the product does not work as expected, yet the checks created during implementation pass. And the reasons are not always as trivial as 2 + 2 equaling 5 – sometimes the mocks used in automation can be silently misleading.
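A sketch of how a mock can silently mislead (a hypothetical scenario of my own: the real service renamed a response field, but the mock was never updated):

```python
from unittest.mock import Mock

# Pretend this is the production service today: the field was renamed
# from "name" to "user_name" at some point.
def real_get_user(user_id):
    return {"user_name": "alice"}

def greeting(service, user_id):
    user = service(user_id)
    return f"Hello, {user['name']}!"  # still expects the old field

# The mock returns the OLD response shape, so the check stays green...
mock_service = Mock(return_value={"name": "alice"})
assert greeting(mock_service, 42) == "Hello, alice!"

# ...while the real integration is actually broken:
try:
    greeting(real_get_user, 42)
except KeyError as missing_field:
    print(f"Real call fails: missing field {missing_field}")
```

The mocked check verifies our assumption about the service, not the service itself – which is exactly why contract tests or occasional E2E checks are worth having alongside it.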
Observe the right level for checks
If you can write a unit test, is that big Selenium suite checking exactly the same functionality really necessary? There may be cases where it is – say, when the product is very UI-heavy – but in most cases it is useful to question whether the test automation we are doing is being done smartly, rather than done just to have something. Questioning the levels of checks can be a good start.
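As an illustration (my own hypothetical example, deliberately simplistic), a validation rule extracted into a plain function can be covered by a millisecond-fast unit test; a Selenium test that types an invalid email into a form and asserts on the error banner would exercise the same rule orders of magnitude slower.

```python
# Hypothetical validation rule pulled out of the UI layer so it can be
# unit-tested directly. The rule itself is simplified for illustration.
def is_valid_email(address: str) -> bool:
    return "@" in address and "." in address.split("@")[-1]

def test_email_validation():
    assert is_valid_email("jane@example.com")
    assert not is_valid_email("not-an-email")
    assert not is_valid_email("jane@nodot")

test_email_validation()
```

The UI suite can then shrink to one happy-path journey, leaving the rule's edge cases to the cheap lower-level checks.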
Aim for a healthy amount of checks
It is easy to make the pipeline green if checks are missing – if you never write a test, how can it fail? This reminds me of a meme we once printed for the team I was in:
On the other side, we may also over-automate, so we have to balance our checks. How much should we automate? I really like Alan Page's stock automation phrase (from his article which introduced it): "You should automate 100% of the tests that should be automated".
So, if I had to summarize my thoughts, I’d say:
Instead of only looking at whether the pipeline is green, we should also observe the implemented test automation itself: its meaningfulness, its correctness, and its balance across test levels and amounts.
Assumptions are a breaking force – if we assume that every team has a green deployment, that tells us nothing about their quality apart from the fact that their written automated checks passed. It does not assure that the checks are correct, that they make sense in general, or that the coverage is good.
Last year's Quest for Quality conference had a huge impact on me: it was the best conference I attended in 2017, I spoke there and was humbled to have my talk rated the highest by the audience, I made wonderful connections with whom I'm still in touch to this day, and being there even initiated my move to another country for a new career challenge!
This year, I had the honour of being part of the programme committee and helping choose the talks. What a hard task that was! The conference's theme, Reinventing QA for the New IT Era, was refreshing yet challenging: there were so many great talks, but looking back at the theme, I could not see some of them being heard at Q4Q. Knowing well myself how much work and effort goes into each tiny abstract, with a heavy heart I rated the talks as objectively as I could with the information I had. The combined votes of a diverse programme committee were considered, and, I must say, the speaker lineup was pretty impressive: there were quite a few new voices sharing the stage with the experts, and even speakers who normally do not speak at testing conferences.
After serving on the programme committee, I also got to enjoy the conference as an attendee. I did not think my 2017 experience at Q4Q could be challenged, but I believe it was. Both years Q4Q was in Dublin, and I just fell in love with the atmosphere there, not to mention the inspiring thoughts I heard at the conference and the wonderful people I met again. Afterwards, I was just smiling from ear to ear with new ideas buzzing in my head.
To give you a glimpse of what kind of an experience the Quest for Quality conference is, I'll touch on the 3 key areas where Q4Q2018 excelled: organisation, content & people.
Proud to #gather the #world in #Ireland: Austria, Belgium, Bosnia and Herzegovina, Bulgaria, Brazil, Croatia, Estonia, Finland, Germany, Hungary, India, Israel, Latvia, Lithuania, Norway, Portugal, Russian Federation, Slovenia, Spain, Sweden, Switzerland, UK, USA 👉 at #Q4Q2018 pic.twitter.com/UqbcZwhTgO
Quest for Quality wins hearts with its welcoming atmosphere: many nationalities attending, speaking, and even organising this event. This diversity definitely speaks to many and makes people feel included. While Dublin is an absolute gem of a city in itself, the beautiful Marker Hotel as a venue was a lovely addition.
The organisers are very kind and helpful people as well: always there for you, always looking for feedback. I still smile remembering one of the main organisers, Nikola, turning to me to ask what I thought of the talks or the event in general. It is important to have an open heart and a willingness to improve.
With all this, even the usually scary filming crew was not scary at all. The friendly Andjelina conducted the interviews and really made sure everyone felt comfortable. She was so kind that, barely knowing me, after hearing that I'd be coming to Belgrade she invited me to meet up for a coffee. It still feels rather weird to see myself in a video testimonial about the conference, but I am sure that having such good support helped a lot.
After attending many testing conferences, content became one of my very top priorities. Presentation skills matter, of course, but if there is no useful content – what do you gain from that! I am happy that Quest for Quality met my expectations, and, unlike some other conferences I've attended, it included topics that were not just testing-related – which helped us learn something new, broaden our minds, and simply get inspired.
Here are some of the talks that really left an impression on me.
Anna Royzman from Global Quality Leadership Institute delivered the keynote "Test Leadership of the Future: New Challenges, Big Opportunities". She spoke of the challenges we are facing right now, how important it is to be a quality leader, and how our role may actually change. I really enjoyed this talk – I ended up writing two pages of notes. One of my key takeaways was how powerful being yourself is: you can help others learn, so you have to embrace that. Also, her 8 principles of a modern quality leader really speak to me. It's important to share knowledge and speak up in order to inspire and coach others.
Davar Ardalan from IVOW told us about "Storytelling in the Age of Robots and Artificial Intelligence". IVOW is a storytelling agency that wants to create deeply inclusive AI. Very often when talking about AI we forget the human aspect: what about culture? Wouldn't it be great to have an AI that can help us recognise cultural events and stories? Or an AI telling us stories about our ancestors? I loved the idea that we have to work on making AI inclusive and giving it cultural context. Also, Davar's storytelling skills are just wonderful – she has worked at NPR, so hearing her speak is quite an experience in itself, let alone with such content.
The closing keynote by Fabian Dittrich from Helpando, "Agile living and the future of work: What I learned as the CEO of a nomadic company", was very refreshing and inspiring. Fabian quit his job and decided to work while traveling the world. He shared all the adventures he had and what they taught him – from productivity tools to the fact that nothing matters more than living a life that you love.
One word: wow. Crazy, beautiful, smart, inspiring, charming people at Quest for Quality. Just to name a few highlights apart from the constant networking at the conference: breathtaking discussions on security testing the night before the event with Milan and Amela; a spontaneous pub crawl in Dublin after the networking event with the most fun people (Stuart – I absolutely adore him and highly recommend following him; great things will come from him, and some already have, like being featured on The Guilty Tester podcast and launching his new TestingBants podcast; Lewis; Saga; Pieter; Jarl – you are simply treasures); inclusion talks during breakfast with Niranjani and Gabrijela…
So many topics close to my heart were discussed: testing, culture, machine learning, communication, diversity & inclusion. At Quest for Quality, networking goes so smoothly – maybe because of the family-like atmosphere. For me, people are the number one source of inspiration, and this conference was just breathtaking with the amazing new friends I gained.
A blurry memory from the wonderful Long Hall during the pub crawl ❤
So, Quest for Quality did it again… With amazing organisation, people & content, it was yet again one of the highlights of my year. People matter a lot to me, and this is a conference that lets you meet so many amazing people and learn a lot as well. What could be better! I left the building admiring the Dublin sky and the neighbourhood with a huge smile on my face – thank you, Q4Q!
“Build it from scratch your own way and don’t let yourself be influenced by the existing system” – words that are rarely said by the stakeholders, right?
It's been almost a month since I started working on this new engagement. New in all possible ways: we are advised to think of the future and build a future product from the very start, without clear directions. This may sound like a wonderful opportunity (and it is); however, it came with an enormous sense of uncertainty. We had no idea what we were supposed to develop; there were 3 teams assigned, and for 2 weeks all of us were trying to figure out where we should start. There were no user stories or lower-level vision of the product. What is more, all 3 teams are starting with back-end products which are just a first step towards the goal: a huge modern platform to be launched in several years.
The vague high-level vision and lack of direction from stakeholders made us slightly worried. With the first checkpoint approaching in a couple of months, we had to try to understand the product better. So, we decided to try out an even leaner version of the already lean Lean Inception.
My role in this project is QA, so being part of such an early stage is pretty new to me. I took the Lean Inception book, read it over the weekend, and tried to understand the concept better so I could contribute as much as possible to the activity.
Starting the Lean Inception activities
Agile needs some pre-work, and inceptions can help to find out more about the project. Usually they last a few weeks, but because of time constraints, Paulo Caroli created a one-week version to find the product vision, discover features, and define the MVP (Minimum Viable Product). During a Lean Inception week, these activities take place:
Product vision definition
What product is – isn’t – does – does not
Technical and business review
Defining the User Journey
Display Features in a Journey
Sequence the Features
Build the MVP Canvas
For the same reason Lean Inception itself became a shorter version (time constraints!), we also felt we had very big deadlines approaching and not much time, so two activities were picked to be done by all 3 teams:
Defining Personas
Discovering User Journey
Within our teams, we had tried to define the product vision and do some pre-work. However, it all remained very high-level, and the teams had to align their work, so the user journey seemed to be something we would really need. For the Persona definitions, we also invited the client's departments we may need to work with in the future. Our exercises did not follow every step of the Lean Inception concept exactly – we adjusted some of them to fit us better.
We had only 2 hours scheduled for the Personas description. When all the participants met, we split into groups of 3–4 and discussed all the possible personas for the product. At first it was just a brainstorm of all possibilities (funnily, some of our personas were other back-end systems, because we had to cover those too – so it was not only people). We needed this to actually start going somewhere: it was hard to think of a usual person using the product, as the product looked way more complicated than that.
Each persona definition includes a nickname and drawing, a profile, behaviour, and needs. The exercise was also pretty fun, as we could imagine funky characters that might be edge cases of the actual user spectrum.
For example, let's say our product is a ticket booking platform. Then we should collect all kinds of personas who may use this platform. Some of them could be: an elderly man who wants to buy a ticket to a gardening convention nearby (wants it to be easy to use, may be slower than usual, less tech-savvy), a teenager who follows trends and wants to quickly get a ticket to a hip concert, or an employee of a tech company who is very tech-savvy and wants to use the platform's API to build their own service for buying tickets, etc.
Then we discussed the persona ideas we had, combined them (for example, security-conscious users, or one type of system, etc.), and assigned certain types of personas to the groups. Back in the groups, we had to think of personas for our assigned category.
This exercise, unfortunately, took way longer than we thought – we hit our time limit and had to schedule a follow-up session. In total, we spent around 4 hours, and in the end we had around 14 personas described.
After the persona definition, we did not feel relieved – rather the opposite. We still had no idea what we were building, and the personas in our context did not seem very helpful. We could not discover features as Lean Inception recommends. This outcome brought a certain level of stress: we wanted results quicker, and after spending 4 hours on something that did not really help, we were frustrated.
Discovering a User Journey (and Features!)
For the user journey discovery, we decided to involve more people and asked at least a pair of developers from each team to join (the persona definition had been done mainly with product owners, business analysts, QAs, and UX designers).
First of all, after some heated discussions, we decided to choose only 2 personas out of the 14. We split into two groups and tried to come up with a user journey in each. It was a challenging task, especially since our personas did not seem to really touch on what we were doing. So after two hours, yet again, we didn't feel things had become clearer.
After this, we had another meeting to present the user journeys to a wider audience. And this was actually extremely useful. What I think helped us a lot was having a great facilitator, as well as a big group of people adding their questions.
What we tried to do was look deeper – the user wants to perform an action, but what happens on the back-end? What kind of features should we provide for this action to happen?
We used a bunch of post-its to write down our assumptions, as well as the must-do back-end actions for the user to succeed. These felt like features (finally!).
After feature discovery
All three teams met to discuss all the features in the journey and assign them to the teams responsible. This actually helped us see the first MVP's scope more clearly. As we had a lot of assumptions and the story in front of us, we could see where we could already deliver value, all working together, in the first iteration. All this left us with certain features which will be split into user stories or become epics within the responsible teams. It was a great relief to finally come up with something.
How did we do it in the end?
As we did not have much clarity about the product we were working on, our leaner Lean Inception had these steps and outcomes:
Personas definition
Took way longer than expected and was not as fruitful as we thought it would be.
We ended up with 14 personas and only 2 were used (out of which 1 was enough for the MVP).
User journey discovery
It was very challenging to create a user journey without features, as we were not sure what should actually happen.
When reviewing the user journey, we went deeper and added what features we should have for certain steps – that was super useful!
In the end, the user journey helped us actually realize what features we may need for the MVP!
We ended up with an MVP suggested by the user journey and left the business and tech review to be done within the individual teams.
A lot of us were involved in a Lean Inception for the first time, so there were some learnings along the way. What we tried to do was save time, but it did not always actually save time. If we had to do it again, we would likely aim for a much shorter Personas definition meeting (most of the work done there was not used anyway). What our user journey discovery ended up being was a mix of feature discovery, feature sequencing, and even helping to form the MVP.
What helped us a lot in the user journey session was having a great facilitator. Looking back, I realize that a good Lean Inception facilitator can help enormously! If we had to do it again, we would not treat it as a game, as we likely did for the Personas definition, but rather find a strong facilitator who would keep the group consistent, maintain a good pace towards an outcome, and not let it deviate too much.
Also, I feel that some of us were sometimes tired of meetings and would jump to conclusions which weren't the best. So, in the end, these are my 3 summarised learnings:
A strong facilitator helps a Lean Inception have a more structured format and a better outcome.
Take your time with the Lean Inception activities: don't try to skip steps, but also do not dwell too long on tasks which do not provide the wanted results.
Do not follow the book blindly. Lean Inception is not always the best format for finding out about the product – it is all contextual, and for your product some steps may not be as necessary or useful. Adjust it according to your needs.
In the end, being a QA and questioning from the start was very challenging, but also a great experience. I would highly recommend having a diverse set of cross-team members in the Lean Inception – everyone may have something to add, which builds a great basis for the product and clarifies possible misunderstandings.
It has been quite a journey. I started as a completely manual tester who could occasionally do exploratory testing. Then I made a drastic change: transforming my ways of working, learning automation, using monitoring tools, and moving towards a more generic QA role where testing in production is part of the quality assessment. And now comes yet another big change in my quality professional's journey… I promote replacing the QA column after development with something like "Desk Check".
I recently joined a new project engagement where we can build the product from scratch. This means we are also creating our work culture from the ground up. It seems our favorite phrase nowadays is "adaptable to change". With all this, we are trying to identify the first version of our work board.
When one of our team members automatically added a column called "QA" after development, I suggested renaming it to "Desk Check". You may wonder why I would do that when I am still part of the team in a QA role.
Quality should be in-built, not tested in
Thinking about quality should start as early as when the user story or feature is being created. How will we gain confidence that development was successful? What metrics will we use to measure the implementation? Can we recover easily from worst-case scenarios? Questioning is a huge part of promoting quality, and it should happen throughout the development process, even before the desk check.
Desk check is not assigned to any role specifically
Whether development was successful can be evaluated not only by testers, but also by product owners or even other developers. A desk check is more of a concept where developers show their work (and their implemented checks), get asked questions, and sometimes pair test. It can be very useful to get a product owner to give feedback on the feature before it is marked as done.
Quality of the product is a shared responsibility
When I suggested using “Desk Check” instead of “QA”, one of the developers smiled and said: “Oh, so you’re not a control-freak gatekeeper. We all have to be responsible.” This is exactly what I aim to promote. However, what matters a lot here is that your team is engaged in this.
Having the whole team take responsibility for quality is quite a task, and I won’t say you can do it on your own and change people overnight. You can’t. They have to be willing to work in these ways, and that can be very challenging. Being responsible for quality as a developer has certain benefits: you gain confidence in your work’s reliability, learn to question your own work, get to collaborate with and better understand other team members such as the product team, and can use your developer skills to improve the automated checks. The drawback is that you need to put in effort – way more effort than if QA alone is responsible for quality.
In summary, it is a challenging change to actually shift left and not only talk about it. You may find yourself wondering what the QA role does if quality is built in and developers write their own checks… And that’s normal. I did, too. What is important to understand is that teams still need Quality Evangelists to question, promote quality, investigate CI/CD clutter, analyze requirements, tackle misunderstandings and share their testing knowledge with others.
A lot of the time testers feel that they are not valued enough or that their efforts are not visible. Good quality is usually an expected outcome, so it is hard to show that the role of a tester is actually very helpful and contributes to quality improvements.
Possibly the most challenging work environment I had as a tester was becoming the first full-time tester in a startup. There was no testing awareness prior to my role. It took time and effort to prove my value, but in the end, when I was changing jobs, some people openly admitted that they felt I was one of the most valuable people in the company. So, what tips do I have for reaching this state and becoming genuinely respected and valued as a tester?
Open up to learning and collaboration
Take every chance to collaborate with other team members. It does not matter what their role is – it is extremely beneficial to collaborate and learn from others, be it a developer, salesperson or manager. Be proactive in this – tell colleagues that you’d love to learn more and maybe just shadow them for a while. It can add a lot to your domain knowledge as well as to your relationships with team members. Sometimes a programmer may even think of you when they are working on a new feature and ask for your input on unit tests, for example.
Be transparent about what you work on and ask for feedback
If you have daily standups, share a summary of your findings during them: it has to be concrete and informative. Try to be specific and mention what areas were tested and what the overall quality was; if a big issue was found, feel free to share it. If your organisation does not have standups, try to communicate this information through other channels – be it weekly discussions, plannings or dedicated quality reports. Why not make a quality newsletter? Keep people updated. Also, if you need any help or think that testability was causing some issues, let the team know. Sometimes all the team needs is to know about the pain points in order to help you solve them. Another tip is to arrange regular knowledge-sharing meetings or show-and-tell sessions on what you created – maybe you learned some tricks in test automation or found an interesting bug that required a lot of investigation. Let the team know.
Promote pair testing
Do some sessions with developers, managers, product team members or even sales people. It will help them see your role differently, and it may uncover unexpected bugs. Every person has a different set of experiences, and their usage of the product may be different. Often developers can sense which parts of the product are buggy, while product or sales people know what is actually important and where to pay extra attention when testing.
Use analytics to prioritize and drive your testing
Testing in production is more and more of a thing. It is very important for us as testers to get to know our users. Often we cannot cover all the test scenarios anyway, especially in the times of big data and microservices. If possible, get to know the monitoring systems – what is being monitored in production? Can you see which features are mainly used? What browsers are your users on? All this data can help you identify what actually matters. You can then prioritize your testing based on these learnings and even include impact numbers in JIRA tickets. For example, you could quantify how important an issue on IE8 is by looking at the analytics numbers for its users. The same can be done for functionality-related problems. If the issue you reported is on IE11 and most of your users use it, that adds extra weight. In the long run, business teams will really respect your input, as you will be able to provide quality insights based on actual KPIs (if they relate to user experience). Testing driven by user data can help you provide well-respected insights on quality which could be useful even to the CEO.
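As a minimal sketch of how such impact numbers could be derived, here is a small Python example. The browser names, usage shares and the `impact_weight` helper are all hypothetical – in practice the shares would come from your own analytics tool:

```python
# Hypothetical browser-usage shares, as you might export them from an
# analytics dashboard (all numbers here are made up for illustration).
browser_share = {
    "Chrome": 0.61,
    "IE11": 0.22,
    "Firefox": 0.09,
    "Safari": 0.05,
    "IE8": 0.03,
}

def impact_weight(affected_browsers, share=browser_share):
    """Estimate the fraction of users potentially hit by a bug."""
    return sum(share.get(browser, 0.0) for browser in affected_browsers)

# A bug seen only on IE8 affects roughly 3% of users, while the same bug
# on IE11 affects roughly 22% - worth noting in the JIRA ticket.
print(round(impact_weight(["IE8"]), 2))
print(round(impact_weight(["IE11"]), 2))
```

Even a rough number like this turns “it’s broken on IE11” into “this affects about a fifth of our users”, which is far easier to prioritize.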
Involve yourself in support and customer feedback analysis
If there is feedback functionality or a support team for your product, try to get involved there. This will help you get to know the users and their pain points. By analysing the issues, you will learn more about the product and may also get asked to join further investigations. This way you will be learning a lot of valuable information about the actual users, which will be really appreciated by anyone on the team.
These 5 points really help raise testing awareness and communicate the value of testing to the company. In the end, we are all working towards the same goal of having a high-quality product, and as testers we promote this mindset.
Recently I have been thinking about the future of testing. More and more I think that the future of the tester’s profession won’t be about technology choices or even automation, but rather about adding a human quality to products. We will be the ones to stay alert to the ethical sides of products, and to question design, development and usability (the ease of use of a product or service).
As a fairly experienced question asker, I get to wear multiple hats and collaborate with various departments during product development. From my experience, as a QA you get to work with (and not only these people, of course):
R&D questioning algorithms and their output
UX designers questioning design choices and trying to put themselves in the user’s shoes
Business and product teams questioning requirements and acceptance
Development teams questioning implementation
Management questioning priorities
Sales teams questioning domain
All this questioning, for me, means representing the user: making sure the quality of the product is satisfactory and the user feels good using it. When it comes to feelings, usability is one of the top qualities.
I am not sure if it’s because of my recent thoughts on people vs. products, but I have become very sharp at observing the world, and, oh boy, how much it hurts when our lives are affected by poor usability and bad design.
Usability and Bad Design Adventures
I was flying into Munich airport recently and remembered one of the most interesting talks I heard at EuroSTAR 2017, “The Sky Is The Limit! – Or How To Test A New Airport Terminal”. In this talk, Christian Brødsjø shared the experiences of testing Oslo Airport. And, of course, it involved people – they had to assess the readiness of the airport, its ease of use and its operational abilities. Airport testing is not an easy task; it requires a lot of time and simulation of actual airport activities in order to see what feedback people give and how it would actually work. Nobody wants to repeat the story of the disastrous opening day of Heathrow’s Terminal 5.
When I was searching for more information on airports, I found many articles on failed airports, and even airport representatives admitting that their airports are a mess. This makes me think that I am not alone in having bad feelings about airports. Sometimes I need a reminder that bad user experience is something we should talk about.
In the past month, I had the pleasure of working in the same team as a very caring UX designer, Shawn Lukas. We discussed many times how important it is to care about the actual users. Often we don’t even know the people for whom we are creating the product – we have to make sure to get to know them instead of guessing or assuming what they are like. In addition, as users we very often tend to blame ourselves for product issues. A lot of the time we take products the way they are and deal with their imperfections: it may hurt to use them, we may get annoyed, but we stay silent and just try to find workarounds. It should not be this way; the way we feel about products matters, and we should speak up.
So, coming back to Munich airport… It is one of the busiest airports in the world, and I am sure that a lot of people worked on making it a good experience and did as much as they could. However, although I travel a lot and usually don’t expect much from airports, certain design decisions left me annoyed, frustrated and even angry at some points. I am sure that my mum would get lost in that airport – and that is not a good sign, because everyone should be able to use an airport. Especially since traveling is already a pretty stressful thing in itself.
How did Munich airport manage to trigger my feelings?
Sunday. After waiting at the airport and traveling, I just wanted to get some rest and get out of the destination airport. After landing, I went to get my luggage. It is a big airport, so things get rather tricky, with turns and quite a bit of walking – that’s alright. However, on the way to the exit these things bothered me:
Confusing direction arrow signs. Unfortunately I did not take a photo, but imagine this – there is a space with many escalators, some going up (on the left), some down (straight ahead). There is a sign that the baggage claim is ⬆. Does it mean you should go to the left and up, or straight ahead and down? Apparently you should go straight ahead and down even though the arrow points up – I learnt this the hard way by first trying to go up.
No indications to explain certain experiences. Finally I get to a little room where I see no more baggage claim signs, but what I do see is a train – a train going to other terminals, I assume. I hesitate and look around for more signs of where the baggage claim is, as I just want my bag, not to fly somewhere else (even if I wish I could at that point), and an angry airport worker tells me to get on the train. I say, “I need to get to the baggage claim”, and he points at the train and says angrily, “This is the baggage claim”. I am already a bit frustrated by this – how could I have known to take the train? So I murmur back while getting on, “No, this is the train”. A little human understanding would be nice in this service: add a note that you need to take the train to get there, rather than pointing at the train and calling it the baggage claim. It’s not. It’s the ridiculous train.
Green signs on a forbidden exit. I reached the baggage claim, got my bag and looked around – it was a big room with windows and doors, and I could see people walking outside in the parking lot. Usually I would not expect to get out this easily – we always have to pass through corridors, and official arrivals are inside the airport. However, this time I decided to check whether it was some kind of shortcut, because the doors had green signs on them. Only when I got closer did I see that opening them would trigger an alarm – it was just an emergency exit:
Usually alarm-controlled doors or emergency-only exits are marked with red, so why is this one green? I walked back from the door and eventually managed to leave the airport a different way.
I may not have noticed this experience before; I may have taken it for granted or as is. But the more I work in tech, the more I realise that everything we do and create is for people. It is not okay to confuse your users with bad design and usability.
Why should we care about usability?
As QAs, we very often get to see the whole picture of the product or service. This adds a lot of responsibility: we should aim to feel the same way about the product as our users do. The challenge is that, being involved in the actual development, we know why certain design and tech choices were made, and this may add a familiarity bias and make us take things the way they are. However, we have to remember that products are developed for certain users, and this means that their quality will very often be evaluated by feelings. As the saying goes:
People very often don’t remember what you did, but they remember how you made them feel.
So, make sure to question usability and design. Catch any kind of feelings you may have about the experience and voice them. And, for the best result, get to know the actual users in order to understand their feelings.
P. S. Ironically, in order to write this post I had to log in to my WordPress account, and I was again annoyed by the user experience:
Why would the field say “Email Address or Username” when only a username is allowed? I entered the correct e-mail, and then managed to send a login link to that very same e-mail and log in by clicking it (as I could not guess the username). This just sums up why you should always think twice about design: how users will interact with your product and how they will feel afterwards.
There is one ultimate type of person that I adore the most in my life: smart, but humble. This does not sound like anything rare, right? Yet it is. In the tech world there may be tension, competition, and even stress-induced forgetting that others are humans, too.
I always get my inspiration from people who want to lift others up rather than just themselves. An example is people who honestly care how you’re doing and actually give feedback on how you could improve. These people share their ideas and their keys to success with anyone who is willing to learn. In my testing career, I was amazed to meet so many professionals wanting to help you improve: be it a colleague programmer willing to share ideas on how you could create a better automated checks framework, or respected experts in the field supporting you on Twitter or sharing their books for free.
This is how I came across How to thrive as a Web Tester by Rob Lambert: I really like Rob’s ideas on testing, especially on the social aspect of it, and a few weeks back I saw his tweet that you could download the book for free that day. I could not miss the chance: I downloaded it immediately and read it in a few days.
“How to thrive as a Web Tester” is a collection of great tips and lessons learned by Rob Lambert who has been working in testing for more than 20 years. The book has two parts: social aspects of thriving as a tester and techniques on testing websites.
I found both parts great, but the first part especially spoke to me. Rob shares a lot of realizations about work as a tester, often related to psychology and communication. The second part, on web testing, has many practical tips which are especially useful for someone new to web testing. I really enjoyed reading the book, and here is a summary of the top 3 ideas I liked.
My top 3 favorite lessons from “How to thrive as a Web Tester” by Rob Lambert
Be the best tester YOU can be
This point particularly spoke to me. We tend to compare ourselves to others constantly. Then sometimes we get demotivated because we don’t know as much about something as person X does. Or we are not as smart as them, or not as quick, or not as good a public speaker, and so it goes on… It is time to embrace ourselves for who we are. In the book, Rob reminds us to confront our own beliefs about what a good tester is to us, not to others. Where should we improve? How can we become the best version of OURSELVES? A brave piece of advice Rob gives in the book is to avoid mediocrity in your workplace. In order to become the best version of yourself, you must have an environment which allows you to experiment, fail, learn, succeed and grow. This means that you have to choose a healthy workplace which supports your growth.
Ask good questions
High quality questions generally lead to high quality answers. High quality questions are the hallmark of good testers.
I wrote down at least 5 quotes from “How to thrive as a Web Tester” related to questions. This is really something I aim for in my work: ask more and, by doing so, be more productive. Sometimes a question about the implementation can uncover a lot of “we haven’t thought of that”, and it saves a lot of time for you as a tester, too, because it all leads to conversation instead of many bug reports. In the end, we are all working for the same purpose – to build a high-quality product.
It is not always around more testing
Sometimes there is a tendency to automate as much as we can, but this is not always necessary – automate where it makes sense. Also, your work can be more productive and faster if, as mentioned above, you ask questions and use tools to test quicker.
What I liked a lot in the second part of the book was the suggestion to use various tools to ease testing. Rob has a support page for the book with all kinds of resources accessible to everyone, and using tools is possibly a tip I would give to my younger self, too. Many times I have filled in text fields manually or done other test-data preparation tasks which took a lot of time and were pretty error-prone. A way to more productive testing can be as simple as having a browser extension which helps you fill in text input fields. One of my favorites is the Bug Magnet Chrome extension by Gojko Adzic.
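The idea behind such fill-in tools can be sketched in a few lines of Python. The catalogue below is a hypothetical, trimmed-down set of “interesting” values in the spirit of Bug Magnet, not its actual data set:

```python
# A tiny, hypothetical catalogue of edge-case values for text fields,
# in the spirit of tools like Bug Magnet (not its actual data set).
EDGE_CASES = {
    "empty": "",
    "whitespace only": "   ",
    "very long": "x" * 10_000,
    "html/script injection": "<script>alert(1)</script>",
    "sql-ish": "'; DROP TABLE users; --",
    "unicode mix": "ünïcødé ✓ 测试",
    "padded": "  leading and trailing spaces  ",
}

def inputs_for(field_name):
    """Yield (field, label, value) triples to try against a text field."""
    for label, value in EDGE_CASES.items():
        yield field_name, label, value

# Print a short preview of each value to paste into a hypothetical
# "username" field during an exploratory session.
for field, label, value in inputs_for("username"):
    print(f"{field} [{label}]: {value[:25]!r}")
```

Keeping such a catalogue in one place means every tester on the team tries the same awkward inputs consistently, instead of whatever they happen to remember that day.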
Reading Seth Godin’s post Perfect vs. important I realized that his idea is very relevant to testers. To rephrase, the main thought of his post is:
Spend more time on making something better (more useful) than polishing it to perfection
When it comes to testing, testers frequently fall into the habit of reporting every minor issue found, which sometimes leads to quantity over quality. Have you ever reported an ugly progress indicator or a not-quite-perfect alignment of UI elements? I have. And I even fought for these to be fixed.
Obviously, UI is important. A distortion bug on IE9 can make you lose the customers who use IE9, for example, and an ugly UI is not inviting to use. However, let’s stop for a minute – what is the actual importance of these issues for your product? Are they more important than a security bug where a user can access a different user’s account by changing the user id in the URL?
Sometimes we are wasting our energy, effort and even nerves with bugs which are for “polishing to perfection” rather than making the product better.
Think for a moment: what is the main purpose of the product?
The art of being a good tester is the ability to ask good questions, so let’s ask ourselves some questions when we test:
Does the product work as expected?
Are there any areas which may cause trouble and were not thoroughly tested?
Does my testing concentrate on making product better or perfect?
Do we (testing + other departments) have time to polish the product to perfection? (If yes – yay, there is time to fix minor issues as well! If no – then concentrate on the important functionalities.)
Sometimes you have to let go of minor bugs – there are more important features to test and improve. Be smart with your priorities: work on making the product better, not perfect.