Quality is NOT perfection

I’m moving my blog to https://qualitybits.tech – check it out for more content.

There are some words in the quality assurance field that I have grown to dislike; let me start with perfect. Only by learning more and growing over my career did I realise that we need to be more careful about how we talk about our field and what the exact expectations are. If we are not, we may stagnate in siloed tribes of QA departments, full of grudge and pain over never getting the perfect software.

Perfect software does not exist – bugs do not mean that quality is bad. What matters is how you deal with bugs and mistakes, and how you proceed from there.

Even in 2019, at quality-focused (and other) conferences, I occasionally hear QAs say that they got into the field because they are perfectionists and they LOVE perfect software. I silently cringe and, honestly, feel a bit sad for them – I have been there. I was hurt when my reported bugs were not fixed, when nobody seemed to get me or appreciate my work. However, this often goes hand in hand with how the organisation is built and which ways of working are implemented.

When all roles collaborate, there is more empathy and more shared responsibility for any issue that gets noticed. Bugs are not a sign of bad quality; bugs are… inevitable. Once we accept that there is no such thing as perfect software – only software with no known bugs – then when a bug does pop up, we can treat it as a learning opportunity. Work processes like a zero-bug policy can help us with that.

In my team the backlog has no bugs: not because there are none, but because right now we are not aware of any. A zero-bug policy for us does not mean that we have zero bugs at all times; it means that once we discover a bug, it gains the highest priority and gets tackled as soon as possible. No bug is left untackled – we grab the opportunity to learn immediately. An important detail is that I am saying tackled, not necessarily fixed: some bugs do end up being “fixed” as expected, but a decision is always made. (I just remembered that at the start of my career I would be asked to add the expected result to my bug reports, and that in my current team we do not have that field – we discuss it through other ceremonies and collaboration instead of me dictating what should work how.)

And this leads me to one of my main learnings:

Some bugs are really not that important: the value the product delivers may outweigh having a bug-free product.

We have to make trade-offs. Sometimes a pixel shift in the UI means nothing, even if it looks ugly to us (how many times have I fought for bugs I thought were disastrous!). It all depends on the product – for a UI-focused product it may be a deal-breaker, while for a different product it is nothing.

What changed my career was getting exposure to analytics, monitoring, metrics, and logs. Understanding what is actually important to the user is eye-opening. We may think that as QAs we represent the customers, but we may be surprised! Also, with analytics we can quantify the impact of bugs (my article on Sticky Minds about monitoring goes into greater depth).

We all make mistakes. High-quality software means that we can recover quickly from mistakes and handle failure gracefully.

Netflix has shown a great example of failing on purpose with its idea of a Simian Army. To quote Cory Bennett and Ariel Tseitlin on that:

“We have found that the best defense against major unexpected failures is to fail often.”

Forget perfection – strive for continuous improvement: learn from mistakes and consistently work on good ways of working, with healthy practices adding up to a high-quality product.

P. S. Two books that popped into my head while writing this, and which I can definitely recommend, are Perfect Software: And Other Illusions about Testing by Gerald Weinberg, and Accelerate by Gene Kim, Jez Humble, and Nicole Forsgren. The first touches on the perfection aspect quite a bit at a high level, while the second talks more about high-performing teams and how to measure success better.

Automate all the RIGHT things

“We don’t need this E2E test if all teams have their pipelines green” – hearing this made me uneasy and slightly annoyed. I went on a tiny rant about automation, checks, tests, integrations, and how a green pipeline does not necessarily mean that the product is in good shape. What do I mean by that? I believe we need to make sure that our test automation is correct, extensive, and meaningful to give us a good foundation for product quality.

With the arrival of DevOps, many companies started adopting Continuous Integration, Continuous Delivery, and Continuous Deployment principles: there are green/red pipelines, quicker releases, faster feedback… To make sure we build quality in, more and more teams are learning the advantage of creating checks for their code (I am intentionally avoiding the word tests here, even though the specific definitions include unit tests, integration tests, contract tests, and so on). If there are enough automated checks, we have a better safety net that prevents major issues and allows us to release to production faster.

With this comes a lot of trust in those checks, though. People tend to believe that checks are correct simply because they exist, and that is a danger zone.

What if the check for a certain thing: a) does not even exist – the deployment will still be green; or b) exists, but does not check what it should (for example, the check verifies a wrong assumption and just confirms what was wrongly understood)?

Automated checks should be meaningful

Checks should be created thoughtfully, not just for the sake of having them. A healthy test pyramid has various levels of checks – not only unit tests but E2E tests as well. The E2E count may be much smaller, but verifying a user journey can be extremely beneficial – it approaches the product from the user’s perspective and may reveal issues that were not covered at the lower levels.
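
To make the pyramid idea concrete, here is a minimal sketch (the calculate_total function and the checkout journey are made up for illustration; a real E2E check would drive the deployed product through its UI or API, while here the journey is simulated in-process just to keep the sketch runnable): many fast unit-level checks, plus one check that walks through a whole user journey.

```python
import unittest

# Hypothetical pricing logic under check (illustration only).
def calculate_total(prices, discount=0.0):
    """Sum the item prices and apply a percentage discount."""
    return round(sum(prices) * (1 - discount), 2)

class CalculateTotalUnitChecks(unittest.TestCase):
    """Many small, fast checks at the bottom of the pyramid."""

    def test_no_discount(self):
        self.assertEqual(calculate_total([10.0, 5.0]), 15.0)

    def test_with_discount(self):
        self.assertEqual(calculate_total([10.0, 10.0], discount=0.1), 18.0)

class CheckoutJourneyCheck(unittest.TestCase):
    """A single journey-level check, written from the user's perspective."""

    def test_user_can_buy_a_discounted_item(self):
        basket = []                                      # the user starts with an empty basket
        basket.append(20.0)                              # adds an item
        total = calculate_total(basket, discount=0.25)   # applies a voucher at checkout
        self.assertEqual(total, 15.0)                    # sees the expected price

if __name__ == "__main__":
    unittest.main()
```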

Question the validity of checks

You can very easily write wrong unit tests. Imagine that for some reason you strongly believe 2 + 2 should return 5, so you implement an addition function that yields exactly what you think is correct, and then write a unit test to verify it – and it passes. Tests are green, the pipeline screams yay, but is it correct? Not at all. Only the human judgement behind the checks can tell whether they are correct. A nice article on correctness, with this example and more, can be found here.
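
In code, the trap from the example above looks roughly like this: the implementation and the check share the same wrong belief, so everything stays green.

```python
import unittest

def add(a, b):
    # Wrong implementation: the author firmly believes that 2 + 2 should be 5.
    if a == 2 and b == 2:
        return 5
    return a + b

class AddCheck(unittest.TestCase):
    def test_two_plus_two(self):
        # The check encodes the very same wrong belief, so it happily passes.
        self.assertEqual(add(2, 2), 5)

if __name__ == "__main__":
    unittest.main()
```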

Validity is a very common problem I notice: sometimes the product does not work as expected, yet the checks created during implementation pass. The reason they pass is not always as trivial as 2 + 2 equaling 5 – sometimes the mocks used in automation are silently misleading.
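
Here is a minimal sketch of a silently misleading mock (the gateway client, its charge call and the status field are hypothetical, not from any real system): the mock keeps returning a response shape the real service no longer uses, so the check passes while the product breaks.

```python
import unittest
from unittest.mock import Mock

def confirm_order(gateway, order_id):
    """An order is confirmed only if the gateway reports a successful charge."""
    response = gateway.charge(order_id)
    return response["status"] == "ok"

class ConfirmOrderCheck(unittest.TestCase):
    def test_order_is_confirmed(self):
        gateway = Mock()
        # The mock encodes an outdated assumption: suppose the real gateway now
        # responds with {"state": "succeeded"} instead of {"status": "ok"}.
        gateway.charge.return_value = {"status": "ok"}
        self.assertTrue(confirm_order(gateway, order_id=42))  # green, yet misleading

if __name__ == "__main__":
    unittest.main()
```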

Observe the right level for checks

If you can write a unit test, is that big Selenium suite checking exactly the same functionality really necessary? There may be cases where it is – when the product is very UI-heavy – but in most cases it is worth questioning whether our test automation is done smartly rather than just done in order to have something. Questioning the level of each check is a good start.
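
As a small illustration of choosing the level (the e-mail validation rule below is hypothetical): when the behaviour lives in one function, a unit-level check covers it in milliseconds, and repeating the same assertion through a full browser suite adds little.

```python
import re
import unittest

# Hypothetical validation rule, implemented in one small function.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(value: str) -> bool:
    return bool(EMAIL_PATTERN.match(value))

class EmailValidationUnitChecks(unittest.TestCase):
    """Runs in milliseconds – no browser or test environment needed.

    A Selenium suite typing these same strings into a signup form and
    reading the error label would verify the identical rule, only slower
    and more brittle; keep the browser for what truly needs it.
    """

    def test_accepts_plain_address(self):
        self.assertTrue(is_valid_email("ada@example.com"))

    def test_rejects_missing_at_sign(self):
        self.assertFalse(is_valid_email("ada.example.com"))

if __name__ == "__main__":
    unittest.main()
```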

Aim for a healthy amount of checks

It is easy to make the pipeline green if checks are missing – if you never write a test, how can it fail? This reminds me of a meme we once printed for my team:

[Image: tests meme]

On the other side, we may also over-automate, so we have to balance our checks. How much should we automate? I really like Alan Page’s stock automation phrase (from the article that introduced it): “You should automate 100% of the tests that should be automated”.

So, if I had to summarize my thoughts, I’d say:

Instead of only looking at whether the pipeline is green, the implemented test automation itself should be examined too: its meaningfulness, its correctness, and its balance across test levels and in amount.

Assumptions are a breaking force – a green deployment for every team tells us nothing about their quality apart from the fact that the automated checks they wrote passed. It does not assure that those checks are correct, that they make sense in general, or that coverage is good.

Replacing QA Column in the Work Board

It has been quite a journey. I started as a completely manual tester who could occasionally do exploratory testing. Then I made a drastic change: transforming how I work, learning automation, using monitoring tools, and moving towards a more generic QA role where testing in production is part of the quality assessment. And now, with yet another big change in my journey as a quality professional… I promote replacing the QA column after development with something like “Desk Check”.

I recently joined a new project engagement where we can build the product from scratch. This means that we are also creating our work culture from the bottom up. It looks like our favorite phrase nowadays is “adaptable to change”. With all this, we are trying to identify the first version of our work board.

When one of our team members automatically added a column called “QA” after development, I suggested renaming it to “Desk Check”. You may wonder why I would do that when I am still part of the team in a QA role.

Quality should be in-built, not tested in

Thinking about quality should start as early as when the user story or feature is being created. How will we gain confidence that development was successful? What metrics will we use to measure the implementation? Can we recover from the worst-case scenarios easily? Questioning is a huge part of promoting quality, and it should happen throughout the development process, well before the desk check.

Desk check is not assigned to any role specifically

Whether development was successful can be evaluated not only by testers but also by product owners or even other developers. A desk check is a practice where developers show their work (and the checks they implemented), get asked questions, and sometimes pair test. It can be very useful to have a product owner give feedback on the feature before it is marked as done.

Quality of the product is a shared responsibility

When I suggested using “Desk Check” instead of “QA”, one of the developers smiled and said, “Oh, so you’re not a control-freak gatekeeper. We all have to be responsible.” This is exactly what I aim to promote. What matters a lot here, though, is that your team is engaged in this.

Building an attitude where the whole team is responsible for quality is quite a task, and I won’t say you can do it on your own or change people overnight. You can’t. They have to be willing to work in these ways, and it can be very challenging. Being responsible for quality as a developer has clear benefits: you gain confidence in your work’s reliability, learn to question your own work, collaborate with and better understand other team members such as the product team, and use your developer skills to improve the automated checks. The drawback: you need to put in effort – way more effort than if QA alone is responsible for quality.

In summary, actually shifting left, and not only talking about it, is a challenging change. You may find yourself wondering what the QA role does if quality is built in and developers write their own checks… That’s normal; I did, too. What is important to understand is that teams still need quality evangelists to question, promote quality, investigate CI/CD clutter, analyze requirements, tackle misunderstandings and share their testing knowledge with others.

How Does the Product Make You Feel: Usability, Testing & Airports

Recently I have been thinking about the future of testing. More and more I think that the future of a tester’s profession won’t be about technology choices or even automation, but rather about adding a human quality to the products. We will be the ones to stay alert to the ethical side of products, and to question design, development and usability (the ease of use of a product or service).

As a fairly experienced question asker, I get to wear multiple hats and collaborate with various departments during product development. From my experience, as a QA you get to work with (and not only these people, of course):

  • R&D questioning algorithms and their output
  • UX designers questioning design choices and trying to wear user’s shoes
  • Business and product teams questioning requirements and acceptance
  • Development teams questioning implementation
  • Management questioning priorities
  • Sales teams questioning domain

All this questioning, for me, means representing the user: making sure the quality of the product is satisfactory and the user feels good using it. When it comes to feelings, usability is one of the top qualities.

I am not sure if it’s because of my recent thoughts on people vs. products, but I have become very sharp at observing the world, and, oh boy, how much it hurts when our lives are affected by poor usability and bad design.

Usability and Bad Design Adventures

I was flying into Munich airport recently and remembered one of the most interesting talks I heard at EuroSTAR 2017, “The Sky Is The Limit! – Or How To Test A New Airport Terminal”, in which Christian Brødsjø shared his experiences of testing Oslo Airport. And, of course, it involved people – they had to assess the readiness of the airport, its ease of use and its operational abilities. Airport testing is not an easy task: it requires a lot of time and simulation of actual airport activities to see what feedback people give and how everything would actually work. Nobody wants to repeat the story of the disastrous opening day of Heathrow’s Terminal 5.

When I searched for more information on airports, I found many articles about failed airports, and even airport representatives admitting that their airports are a mess. This makes me think I am not alone in having bad feelings about airports. Sometimes I need a reminder that bad user experience is something we should talk about.

In the past month, I had the pleasure of working in the same team as a very caring UX designer, Shawn Lukas. We discussed many times how important it is to care about the actual users. Often we don’t even know the people for whom we are creating the product – we have to get to know them instead of guessing or assuming what they are like while we create something for them. In addition, as users we often tend to blame ourselves for product issues. We take products the way they are and deal with their imperfections: it may hurt to use them, we may get annoyed, but we stay silent and just look for workarounds. It should not be this way; the way we feel about products matters, and we should speak up.

So, coming back to Munich airport… It is one of the busiest airports in the world, and I am sure a lot of people worked on making it a good experience and did as much as they could. I travel a lot and usually don’t expect much from airports, but certain design decisions left me annoyed, frustrated and at some points even angry. I am sure my mum would get lost in that airport – and that is not a good sign, because everyone should be able to use an airport, especially since travelling is already pretty stressful in itself.

How did Munich airport manage to trigger my feelings?

Sunday. After waiting at the airport and travelling, I just wanted to get some rest and get out of the destination airport. After landing, I went to get my luggage. It is a big airport, so it gets rather tricky with turns and quite a bit of walking – that’s alright. However, on the way to the exit these things bothered me:

  • Confusing direction arrow signs. Unfortunately I did not take a photo, but imagine this – there is a space with many escalators, some going up (on the left), some down (straight ahead). There is a sign saying baggage claim is ⬆. Does that mean you should go left and up, or straight and down? Apparently you should go straight and down even though the arrow points up – I learnt it the hard way by first trying to go up.
  • No indications to explain certain experiences. Finally I get to a little room where I see no more baggage claim signs – what I see is a train, going to other terminals, I assume. I hesitate and look around for more signs or for the baggage claim, as I just want my bag, not to fly somewhere else (even if I wish I could at that point), when an angry airport worker tells me to get on the train. I say, “I need to get to the baggage claim”, and he points at the train and says angrily, “This is the baggage claim”. I am already a bit frustrated – how could I have known to take the train? So I murmur back while getting on, “No, this is the train”. A little human understanding would be nice in this service: add a sign saying that you need to take the train to get there, rather than pointing at the train and claiming it is the baggage claim. It’s not. It’s the ridiculous train.
  • Green signs on a forbidden exit. I reached the baggage claim, got my bag and looked around – it was a big room with windows and doors, and I could see people walking outside in the parking lot. Usually I would not expect to get out this easily – there are always passages to walk through, and official arrivals are inside the airport – but this time I decided to check whether it was some kind of shortcut, because the doors had green signs on them. Only when I got closer did I see that opening them would trigger an alarm: it was just an emergency exit:
    [Photo: the emergency-only exit door with a green sign]
    Forbidden, alarm-controlled doors and emergency-only exits are usually marked in red, so why was this one green? I walked back from the door and eventually managed to leave the airport a different way.

I might not have noticed this experience before – I might have taken it for granted or as is – but the more I work in tech, the more I realise that everything we do and create is for people. It is not okay to confuse your users with bad design and usability.

Why should we care about usability?

As QAs we very often get to see the whole picture of the product or service. This adds a lot of responsibility: we should aim to feel about the product the same way our users do. The challenge is that, being involved in the actual development, we know why certain design and tech choices were made, and this familiarity bias may make us take things the way they are. However, we have to remember that products are developed for specific users, which means their quality will very often be judged by feelings. As the saying goes:

People very often don’t remember what you did, but they remember how you made them feel. 

So, make sure to question usability and design. Catch any kind of feelings you may have about the experience and voice them. And, for the best result – get to know the actual users in order to understand their feelings.

P. S. Ironically, in order to write this post I had to log in to my WordPress account, and I was again a bit annoyed by the user experience:

[Screenshot: the WordPress login form]

Why would the field say “Email Address or Username” when only the username is allowed? I entered the correct e-mail address, then ended up sending a login link to that very same e-mail and logging in by clicking it (since I could not guess my username). This just sums up how you should always think twice about design: how users will interact with your product and how they will feel afterwards.

Testing to Make Product Better vs. Perfect

Reading Seth Godin’s post Perfect vs. important, I realized that his idea is very relevant to testers. To rephrase, the main thought of his post is:

Spend more time on making something better (more useful) than polishing it to perfection

When it comes to testing, testers frequently fall into the habit of reporting every minor issue they find, which sometimes leads to quantity over quality. Have you ever reported an ugly progress indicator or a not-quite-perfect alignment of UI elements? I have. And I even fought for these to be fixed.

Obviously, UI is important. A distortion bug on IE9 can make you lose the customers who use IE9, for example, and an ugly UI is not inviting. However, let’s stop for a minute – what is the actual importance of these issues for your product? Are they more important than a security bug where a user can access a different user’s account by changing the user id in the URL?
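
For contrast, a check for that kind of authorization bug could look roughly like this (a sketch only – the accounts endpoint, base URL and token are hypothetical): authenticated as user 1, requesting user 2’s account must be denied.

```python
import requests

BASE_URL = "https://staging.example.com/api"   # hypothetical test environment
USER_1_TOKEN = "token-for-user-1"              # placeholder credentials

def check_cannot_read_other_users_account():
    """Changing the user id in the URL must not expose another user's account."""
    response = requests.get(
        f"{BASE_URL}/accounts/2",              # user 1 asks for user 2's data
        headers={"Authorization": f"Bearer {USER_1_TOKEN}"},
        timeout=10,
    )
    # Anything other than an explicit denial means the bug is present.
    assert response.status_code in (403, 404), (
        f"Expected access to be denied, got {response.status_code}"
    )

if __name__ == "__main__":
    check_cannot_read_other_users_account()
```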

Sometimes we waste our energy, effort and even nerves on bugs that are about “polishing to perfection” rather than making the product better.

Think for a moment: what is the main purpose of the product?

The art of being a good tester is the ability to ask good questions, so let’s ask ourselves some questions when we test:

  • Does the product work as expected?
  • Are there any areas which may cause trouble and were not thoroughly tested?
  • Does my testing concentrate on making product better or perfect?
  • Do we (testing + other departments) have time to polish the product to perfection? (If yes – great, there is time to fix minor issues as well; if no – concentrate on the important functionality.)

Sometimes you have to let go of the minor bugs – there are more important features to test/improve. Be smart with your priorities: work on making the product better, not perfect.

Dear tester! Others care about quality, too.

Dear tester,

I know that sometimes it feels like the people you work with just want to mark the cards in JIRA as Done without proper testing. Sometimes they tell others “…once it passes the QA” or create tasks for you with subjects like “QA X”, as if it is done only because “they have to”. Annoyance is inevitable when there is pressure to complete the task immediately without reporting any bugs, simply to “pass the QA”. I have written before about why “pass the QA” makes me cringe, so I feel your pain really well – especially since, even after I have tried many times to explain QA vs testing vs checking in my company, the very same wording is still used. I would like to share a story that made me think we sometimes exaggerate a little when we assume that others do not care about quality as much as we do.

Today one of the developers in my company came back with an initial implementation and results to “QA”. The task was fairly simple – a lot of data is generated by an algorithm and we should check it (I’m using check consciously here, as it is not really testing at this point): does it make sense, what patterns of faults do we notice, does the algorithm actually work? All of this should evaluate the quality of the results produced by this new algorithm.

The task’s formulation and the documentation accompanying the data mentioned QA around five times in various forms, and I’m sure you are familiar with most of them: “data to be QAed”, “for the QAing”, or my least favorite, “pass the QA”. These terms do not feel good, as they are not used correctly, and it can feel slightly insulting that your colleagues do not bother to understand what you are doing. However, you cannot teach everyone the terms, and it is important to let it go sometimes. Remind yourself that we all have biases (I have a story on Managing your biases which made me slow down a little before judging). I decided not to exaggerate and to think from that developer’s point of view: we both know what he wants as a result – the quality should be evaluated, even if he uses the wrong terms.

Some colleagues may use the wrong terms and confuse testing, checking and QA, but don’t nit-pick on that. Words matter, but not everyone cares what to call the rose: all you can do as an empathetic quality specialist is show people that you are open to explaining, but only if they want to hear it.

This is not why I’m writing to you, though – this colleague of mine may have used the wrong wording, but letting go of that wasn’t the main takeaway I got.

When the colleague created the task description, it lacked one thing: any description of the algorithm’s implementation details. No documentation had been created yet, no code was mentioned; the only things provided were the generated data and a vague explanation of what should be done (compare columns and say whether they are okay or not, using some human sense and research into each of the options).

I really wanted to see the implementation details: how else can I assess the actual risks? Maybe there are areas and patterns that are design flaws and can be spotted before even looking at the generated data. This developer also tends to work alone, so there isn’t much code review going on.

When I asked if there was any documentation on this algorithm, this was the response I got from the developer:
“Not yet, this is not ready for production yet. When it passes QA there will be a documentation page with all the changes that have come out of the QA process.”

To be honest, this wasn’t something I expected – I replied that to do the “QA process” we need to know the implementation details, and that these shouldn’t become visible only when the algorithm goes to production. We shouldn’t check in the dark.

My reply made this developer write to me personally, and the words they used were, I could say, again a little rough. The arguments for why the documentation wasn’t created were that “it is too big an overhead” and eventually that “it seems that we disagree on the QA process here: for this task, there is no need for implementation details”. How would you react to this, my dear tester? The developer is claiming that, as someone hired to test and give quality evaluations, you shouldn’t look at implementation details at all.

As someone who recently encountered several design flaws in shipped products which caused issues and could have been spotted years earlier, I felt ridiculed. Of course testers, or QA (whatever people want to call these specialists), should see implementation details. Does this developer really think their design and implementation are so perfect that we should look only at the results produced?

Issues can be spotted by getting to know the algorithms and the implementation: you may notice a logical error that causes certain bugs before you even look at the data obtained from running the algorithm.
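
A tiny illustration of what I mean (hypothetical code, not the algorithm from this story): reading the implementation reveals a boundary mistake long before any amount of generated data would.

```python
def bucket_score(score: int) -> str:
    """Classify a score from 0 to 100 into a quality bucket.

    Reading the code already exposes the flaw: a score of exactly 80
    lands in "medium" because the comparison should be >=, not >.
    Spotting that in a review is far cheaper than inferring it from
    thousands of generated rows later.
    """
    if score > 80:
        return "high"
    if score > 50:
        return "medium"
    return "low"
```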

I stood my ground, though. I explained that I would love to see the implementation because it would help me do the “QA process” faster and more efficiently, and might reveal some issues before I even look at the data. I want to be familiar with what the algorithm is actually doing.

And, to my surprise, it worked. The very same developer who had been arguing that QA does not need any implementation details shared with me the code they wrote to produce the results. It turned out they thought I needed detailed documentation, when even the code – which could easily be provided – was enough.

In the end, I realized I could have given up. I could have closed myself off and exaggerated, thinking that I am the only one who cares about proper quality judgement and that people just assign tasks blindly without even considering that there may be issues in their implementation logic. I could have felt hurt by the words used and the impressions I got from this person, but in the end, even if we spoke in different terms, we both aim to complete the quality assessment (not to “pass the QA”, just to understand whether this implementation is good enough). I stood up for myself while simply trying to do my job better, and I got help, even if it took an extra step.

So, dear tester, believe that your colleagues are there to help you – you all want your products to be successful and of great quality. It is not only you; sometimes others just don’t know exactly what you need to do your tasks – open up and ask for it. Only by sharing your needs and communicating can you make others understand your tasks better.

Why phrase “pass the QA” makes me cringe

Today I heard someone say “pass the QA”, and in this post I will share why I believe we should cross this phrase out of every dictionary it appears in, because it is simply a wrong use of definitions.

Let’s break this phrase into two parts: QA and pass.

What is QA?
I am talking here about QA in the sense of quality assurance. Okay, that sounds clear; however, what is quality assurance?

A lot of people mix up QA and testing on a daily basis. There have been various discussions about it, and I mostly lean towards Michael Bolton’s point in his post Testers: Get Out of the Quality Assurance Business. It was an eye-opening blog post for me: my first job as a tester even had it in the title, “Software Quality Assurance Analyst”. Later on, I turned into a QA Engineer. However, I am a tester.

In that blog post Michael Bolton gives so many valid points that it’s like a gold mine – it is one of my favorite posts about testing ever. It basically stresses that as a tester you cannot really assure quality. You inspect it and help to improve it. You test.

QA is not a person or a department of a certain type of professional. Working towards assuring quality is a task for everyone in the company. A tester may play a huge part in it, but the actual “action” assurers of quality are usually the programmers, because they are the ones changing the quality level. And let’s not forget the main part:

Assuring quality is an ongoing task/goal of the company.

What does it mean to pass?

In testing, a test passes when it meets its acceptance criteria.

A test may be built from multiple specific test cases or led by charters. However, the defined acceptance criteria should be clear.

Passing tests relates to a common question in testing: how much testing is enough? Sometimes the answer is not obvious. There may be various scenarios and explorations to be made, and a common standard should be discussed with the product management team: what are the requirements, and should edge cases be addressed in the initial release or iteration?

Why don’t I like the phrase “pass the QA”?

After explaining both parts of this phrase, I can say that for me saying “pass the quality assurance” makes almost no sense.

Quality assurance is an ongoing task, so it is never going to end. You cannot pass quality assurance as such, but you can pass a test.

I do understand the intent of this phrase and why it was used: it was meant to say that testing will be completed with no show-stopper issues and the acceptance criteria will be met.

Let’s not underestimate the power of wording. Saying “pass the QA” can definitely be misleading. The sad news is that this term is quite popular for describing teams of testers. In that case, let’s spread awareness of the differences between QA and testing – we are all doing QA in the company, but only testers do testing as their full-time job (programmers do a fair deal of testing as well, but it is usually not their main responsibility).

Testing challenge was amazing! – Day 31 of 30 Days of Testing

When I decided to join the 30 Days of Testing challenge, I did not have many expectations. Now, after a month of sticking to it honestly and sincerely every single day, I can admit that I have learned a lot and met some wonderful people along the way!

Here is the list of all challenges with links to the posts I wrote about them:

  1. BUY ONE TESTING RELATED BOOK AND READ IT BY DAY 30:
    Bought and read “Explore It!” by Elisabeth Hendrickson
  2. TAKE A PHOTO OF SOMETHING YOU ARE DOING AT WORK:
    What I’m doing at work
  3. LISTEN TO A TESTING PODCAST:
    Testcast Podcast “Testing is Dead”
  4. SHARE A TESTING BLOG POST WITH A NON-TESTER:

  5. READ AND COMMENT ON ONE BLOG POST:

  6. PERFORM A CRAZY TEST:

  7. FIND AN ACCESSIBILITY BUG:

  8. DOWNLOAD A MOBILE APP, FIND 5 BUGS AND SEND THE FEEDBACK TO THE CREATOR:

  9. CREATE A MINDMAP:

  10. FIND AN EVENT TO ATTEND (ONLINE OR FACE TO FACE):

  11. TAKE A PICTURE OF YOUR TEAM:

  12. DOODLE A PROBLEM:

  13. FIND A USER EXPERIENCE PROBLEM:

  14. STEP OUTSIDE OF YOUR COMFORT ZONE:

  15. FIND A PROBLEM WITH AN E-COMMERCE WEBSITE:

  16. GO TO A NON-TESTING EVENT:

  17. FIND AND SHARE A QUOTE THAT INSPIRES YOU:

  18. FIND A BROKEN LINK. AND REPORT IT:

  19. FIND AND USE A NEW TOOL:

  20. FIND A GOOD PLACE TO PERFORM SOME SECURITY TESTS:

  21. PAIR TEST WITH SOMEONE:

  22. SHARE YOUR FAVOURITE TESTING TOOL:

  23. HELP SOMEONE TEST BETTER:

  24. CONNECT WITH A TESTER WHO YOU HAVEN’T PREVIOUSLY CONNECTED WITH:

  25. CONTRIBUTE TO A TESTING DISCUSSION:

  26. INVITE A NON-TESTER TO A TEST EVENT:

  27. SAY SOMETHING NICE ABOUT THE THING YOU JUST TESTED:

  28. SUMMARISE AN ISSUE IN 140 CHARACTERS OR LESS:

  29. FIND AN OUT BY ONE ERROR:
    Off-by-one error hunt

  30. GIVE SOMEONE POSITIVE FEEDBACK:
    Give someone positive feedback

How I helped the account manager to test better – Day 23 of 30 Days of Testing

HELP SOMEONE TEST BETTER

The quality of a product is a very important topic for everyone in the company, so the ability to quickly check the real-time status of the product’s major KPIs is a great skill to have.

In my company, some of these checks are often done by our account manager. Imagine a situation where a partner writes in about a problem with the web service your company is delivering. The account manager is usually the first person to read this, check the problem and reply to the partner. In other scenarios, the account manager may just need a quick check of a certain partner’s service quality before taking any communication step with them. Keep in mind that other colleagues are sometimes unable to help immediately, so the account manager clearly benefits from knowing how to run some checks independently.

Our account manager is a great learner and has been very open to getting to know more about the product, so we arranged a meeting to talk about my beloved monitoring tool – New Relic.

I explained to him some New Relic basics: where to find dashboards, how to see the queries behind each dashboard, and how to write your own queries. We created some queries together to answer a few important questions and modified existing ones to get a grasp of how to build a query when you don’t know where to start.
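
To give a feel for what such a query can look like, here is a rough sketch (the account id, query key, event type and attribute names are placeholders, and the legacy Insights query endpoint is my assumption about the API we used at the time, so treat it as illustrative rather than exact):

```python
import requests

# Placeholders – the account id, query key and attribute names are made up.
ACCOUNT_ID = "1234567"
QUERY_KEY = "NRIQ-your-query-key"

# An NRQL query in the spirit of what we built together: which device
# types have used the web service in the last two weeks?
NRQL = "SELECT count(*) FROM PageView FACET deviceType SINCE 2 weeks ago"

def run_insights_query(nrql: str) -> dict:
    """Run an NRQL query against the (legacy) Insights query API."""
    response = requests.get(
        f"https://insights-api.newrelic.com/v1/accounts/{ACCOUNT_ID}/query",
        params={"nrql": nrql},
        headers={"X-Query-Key": QUERY_KEY, "Accept": "application/json"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(run_insights_query(NRQL))
```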

Of course it will take some time for the account manager to get used to this new tool, but I’m sure he now has a good background and will be able to check some of the important questions in real time.

It was a lot of fun for me to share knowledge, and we even found out together that one of the device types for our web service in the last two weeks was a car; narrowing down which car it was, we learned a fun fact – our web service was being used by Tesla owners.

Where to start if you want to be a speaker? – Day 14 of 30 Days of Testing

STEP OUTSIDE OF YOUR COMFORT ZONE

After attending a few conferences, I realized that many of us may experience impostor syndrome when it comes to professional knowledge. Wikipedia defines it as:

Impostor syndrome is a term referring to high-achieving individuals marked by an inability to internalize their accomplishments and a persistent fear of being exposed as a “fraud”.

This should not happen, because we all have unique stories and career advice to share. So why not step outside the comfort zone and become a speaker?

At one of the conferences I attended the after-party, and a couple of speakers shared their road to becoming speakers. What inspired me most was that they stressed that you already have a “No” by default, so why not apply to the conference and suggest your topic until you get a “Yes”? It cannot get worse, because you already have a “No”. This is how many speakers actually became speakers – by using every chance they had and not giving up.

Working on becoming a speaker has been my recent step outside the comfort zone. I still need to work very hard to actually achieve my goal, but I have collected a few valuable tips which can help you get started as well.

  1. Think of your knowledge and passions to decide what topics you could talk about.
    As a professional you definitely have tools or methodologies you use. These can easily be transformed into a useful presentation for others, especially if it is built from advice and tips on how others could improve their work. Some examples I have often noticed are “Responsive Design” or “How to improve your Selenium tests” – topics that are specific enough, and a lot of people may be interested in hearing more about them. You could also think of something less common that you are passionate about: your own success story, or an area that interests you. For example, maybe you’re passionate about the Internet of Things and could share what you’ve learned about testing it so far?
  2. Read and watch online resources
    Don’t be afraid to search online for information on how to become a speaker. Recently I found a great blog post on how to write an appealing conference abstract, and noticed an upcoming webinar on Conference Speaking 101 by Lee Copeland. These resources are free, but definitely very valuable.
  3. Find a mentor
    There are many speakers who are willing to help you become a speaker. One great way to find them is to apply for the Speak Easy programme, which is free. There you get an experienced mentor who helps you build your talk and gives you a lot of useful advice. I have joined it, and my mentor has been giving great, thought-provoking feedback (some of it inspired this blog post). Also, Maaret Pyhäjärvi recently volunteered to help mentor new speakers and organize a Signature Webinar Series with Ministry of Testing.
  4. Practise speaking in your company and local events
    The best way to gain experience in public speaking is to start little by little and grow your audience. In my company, every two weeks we organize a Secret Gathering where anyone from the company can share their discoveries or work details if they want to. This helps both to understand the team better and to practise your own presentation skills. When you are more comfortable presenting to smaller groups, why not give a speech at a local meetup? If you’re in Budapest and would like to talk about something QA-related, feel free to contact me and join Budapest’s QA meetup as a speaker.
  5. Don’t give up
    Becoming a speaker definitely requires a lot of patience and hard work. I am sure I will get multiple rejections before getting an invitation to actually speak. However, only by trying can we actually achieve our dreams.

The testing community is great, and the more people we get involved in it, the better. Join me in stepping outside the comfort zone and bringing new voices to conferences.