TestBash Manchester 2018

Training

What Do We Mean By ‘Automation in Testing’?

Automation in Testing is a new namespace designed by Richard Bradshaw and Mark Winteringham. The use of automation within testing is changing, and in our opinion, existing terminology such as Test Automation is tarnished and no longer fit for purpose. So instead of having lengthy discussions about what Test Automation is, we’ve created our own namespace, one that provides a holistic, experience-based view of how you can and should be utilising automation in your testing.

Why You Should Take This Course

Automation is everywhere; its popularity and uptake have rocketed in recent years, and it’s showing little sign of slowing down. So in order to remain relevant, you need to know how to code, right? No. While knowing how to code is a great tool in your toolbelt, there is far more to automation than writing code.

Automation doesn’t tell you:

  • what tests you should create
  • what data your tests require
  • what layer in your application you should write them at
  • what language or framework to use
  • if your testability is good enough
  • if it’s helping you solve your testing problems

It’s down to you to answer those questions and make those decisions. Answering those questions is significantly harder than writing the code. Yet our industry is pushing people straight into code and bypassing the theory. We hope to address that with this course by focusing on the theory that will give you a foundation of knowledge to master automation.

This is an intensive three-day course where we will use our sample product to go on an automation journey. This product already has some automated tests, and it already has some tools designed to help test it. Throughout the three days we are going to explore those tests: why they exist, the decisions behind the tools we chose to implement them in, why that design, and why those assertions. Then there are the tools: we’ll show you how to expand your thinking and strategy beyond automated tests to identify tools that can support other testing activities. As a group, we will then add more automation to the project, exploring the why, where, when, who, what and how of each piece we add.

What You Will Learn On This Course

Online
To maximise our face to face time, we’ve created some online content to set the foundation for the class, allowing us to hit the ground running with some example scenarios.

After completing the online courses attendees will be able to:

  • Describe and explain some key concepts/terminology associated with programming
  • Interpret and explain real code examples
  • Design pseudocode for a potential automated test
  • Develop a basic understanding of programming languages relevant to the AiT course
  • Explain the basic functionality of a test framework (a minimal example of the kind of code involved follows this list)
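
For a flavour of the level involved, here is a minimal sketch of the kind of code example and test-framework behaviour the online content covers. JUnit 5 is an assumption made purely for illustration, not necessarily the framework the course materials use.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class CalculatorTest {
        @Test
        void addingTwoNumbersReturnsTheirSum() {
            // The framework discovers methods marked @Test, runs each one,
            // and reports a failure when an assertion does not hold.
            int sum = 2 + 2;
            assertEquals(4, sum);
        }
    }

Being able to read a snippet like this, and explain what the framework does with it, is the foundation the online content aims to establish.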

Day One
The first half of day one is all about the current state of automation, why AiT is important and discussing all the skills required to succeed with automation in the context of testing.

The second half of the day will be spent exploring our test product along with all its automation, and openly discussing our choices: reverse-engineering the decisions we’ve made to understand why we implemented those tests and built those tools.

By the end of day one, attendees will be able to:

  • Survey and dissect the current state of automation usage in the industry
  • Compare their company’s usage of automation to that of other attendees
  • Describe the principles of Automation in Testing
  • Describe the difference between checking and testing
  • Recognize and elaborate on all the skills required to succeed with automation
  • Model the ideal automation specialist
  • Dissect existing automated checks to determine their purpose and intentions
  • Show the value of automated checking

Day Two
The first half of day two will continue our focus on automated checking. We are going to explore what it takes to design and implement reliable, focused automated checks. We’ll do this at many interfaces of the application.

The second half of the day focuses on the techniques and skills a toolsmith employs. Building tools to support all types of testing is at the heart of AiT. We’re going to explore how to spot opportunities for tools, and how the skills required to build tools are nearly identical to building automated checks.

By the end of day two, attendees will be able to:

  • Differentiate between human testing and an automated check, and teach the distinction to others
  • Describe the anatomy of an automated check (a minimal sketch follows this list)
  • Model an application to determine the best interface at which to create an automated check
  • Discover new libraries and frameworks to assist with automated checking
  • Implement automated checks at the API, JavaScript, UI and visual interfaces
  • Discover opportunities to design automation to assist testing
  • Appreciate that techniques and tools like CI, virtualisation, stubbing, data management, state management, bash scripts and more are within reach of all testers
  • Propose potential tools for their current testing contexts
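
To make the “anatomy” item concrete, here is a minimal, hedged sketch of an automated check against an HTTP API, using only the JDK’s built-in HttpClient and JUnit 5. The endpoint and the expected body are invented for illustration; the course’s own examples may differ.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class GetUserCheck {
        // Hypothetical endpoint, for illustration only.
        private static final String BASE_URL = "https://example.test/api";

        @Test
        void knownUserIsReturned() throws Exception {
            // Arrange: build the request against known, controlled test data.
            HttpRequest request = HttpRequest
                    .newBuilder(URI.create(BASE_URL + "/users/1"))
                    .GET()
                    .build();

            // Act: a single, focused interaction with the system under test.
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // Assert: explicit expectations that make failures informative.
            assertEquals(200, response.statusCode());
            assertTrue(response.body().contains("\"name\""));
        }
    }

The same arrange/act/assert shape recurs at the UI, JavaScript and visual interfaces; only the mechanics of the “act” step change.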

Day Three
We’ll start day three by concluding our exploration of toolsmithing: creating some new tools for the test app and discussing the potential for tools in the attendees’ companies. The middle part of day three will be spent talking about how to talk about automation.

It’s commonly said that testers aren’t very good at talking about testing; the same is true of automation. We need to change this.

By the end of day three, attendees will be able to:

  • Justify the need for tooling beyond automated checks, and convince others
  • Design and implement some custom tools
  • Debate the use of automation in modern testing
  • Devise and coherently explain an AiT strategy

What You Will Need To Bring

Please bring a laptop (OS X, Linux or Windows) with all the prerequisites installed; the prerequisites will be sent to you ahead of the class.

Is This Course For You?

Are you currently working in automation?
If yes, we believe this course will provide you with numerous new ways to think and talk about automation, allowing you to maximise your skills in the workplace.
If no, this course will show you that the majority of skill in automation is about risk identification, strategy and test design, and you can add a lot of value to automation efforts within testing.

I don’t have any programming skills, should I attend?
Yes. The online courses will be made available several months before the class, allowing you to establish a foundation ready for the face to face class. Then full support will be available from us and other attendees during the class.

I don’t work in the web space, should I attend?
The majority of the tooling we will use and demo is web-based; however, AiT is a mindset, so we believe you will benefit from attending the class and learning a theory you can apply to any product or language.

I’m a manager who is interested in strategy but not programming, should I attend?
Yes. One of the core drivers of this course is to educate others in identifying and strategizing around problems before automating them. We will offer techniques and teach you skills to become better at analysing your context and using that information to build a plan towards successful automation.

What languages and tools will we be using?
The current setup uses Java and JS. Importantly though, we focus more on the thinking than the implementation, so while we’ll be reading and writing code, the languages are just a vehicle for the content of the class.

Richard Bradshaw
Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years of testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, and is currently the FriendlyBoss at The Ministry of Testing. Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He is also the creator of the YouTube channel Whiteboard Testing.
Mark Winteringham

I am a tester, coach, mentor, teacher and international speaker, presenting workshops and talks on technical testing techniques. I’ve worked on award-winning projects across a wide variety of technology sectors, ranging from broadcast and digital to financial and the public sector, working with various web, mobile and desktop technologies.

I’m an expert in technical testing and test automation, and a passionate advocate of risk-based automation and Automation in Testing practices, which I regularly blog about at mwtestconsultancy.co.uk. I’m also the co-founder of the Software Testing Clinic in London, a regular workshop for new and junior testers to receive free mentoring and lessons in software testing. I have a keen interest in various technologies too, regularly developing new apps and Internet of Things devices. You can get in touch with me on Twitter: @2bittester


Open Space

We see the open space as an initiative to get people talking more, and perhaps to go a bit deeper on some topics. Those topics could be anything, even something you heard at the conference. By deeper, we mean many things, such as discussions and debates, plus more hands-on activities such as tool demos, coding and some actual testing. It could be anything.

So the TestBash Manchester open space will essentially take the form of an unconference. There will be no schedule. Instead we, and I really do mean we, all attendees, will create the schedule in the morning. Everyone will have the ability to propose a session; in doing so, though, you take ownership of facilitating that session. Once everyone has pitched their session ideas, we will bring them all together on a big planner and create our very own conference. Depending on the number of attendees, we expect to have 5-6 tracks, so lots of variety.

Open Space is the only process that focuses on expanding time and space for the force of self-organisation to do its thing. Although one can’t predict specific outcomes, it’s always highly productive for whatever issue people want to attend to. Some of the inspiring side effects that are regularly noted are laughter, hard work which feels like play, surprising results and fascinating new questions. - Michael M Pannwitz

It really is a fantastic format: it truly allows you to get answers to the problems you are actually facing. With conference talks you are always trying to align the speaker's views and ideas to your context, whereas with this format you get to bring your context to the forefront.

Workshops
Morning Sessions

We are often reminded by those experienced in writing test automation that code is code. The sentiment being conveyed is that test code should be written with the same care and rigor that production code is written with.

However, many people who write test code may not have experience writing production code, so it’s not exactly clear what is meant by this sentiment. And even those who write production code find that there are unique design patterns and code smells specific to test code of which they are not aware.

In this workshop, you will be given a smelly test automation code base which is littered with several bad coding practices. Together, we will walk through each of the smells and discuss why it is considered a violation and then refactor the code to implement a cleaner approach.

In particular, we will investigate the following smells (two of which are illustrated in a sketch after the list):

  • Long Class
  • Long Method
  • Shotgun Surgery
  • Duplication
  • Indecent Exposure
  • Inefficient Waits
  • Flaky Locator Strategies
  • Multiple Points of Failure
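
As a hedged illustration, this sketch shows what the Inefficient Waits and Flaky Locator Strategies smells often look like in Selenium WebDriver for Java, next to a cleaner alternative; the locators and timings are invented for the example.

    import java.time.Duration;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    class WaitSmells {
        void smelly(WebDriver driver) throws InterruptedException {
            // Inefficient wait: always burns five seconds, and still fails
            // when the element takes six.
            Thread.sleep(5000);
            // Flaky locator: breaks whenever the page layout shifts.
            driver.findElement(By.xpath("/html/body/div[3]/div[2]/button")).click();
        }

        void cleaner(WebDriver driver) {
            // Explicit wait: proceeds the moment the condition is met,
            // up to a sensible timeout, against a stable locator.
            new WebDriverWait(driver, Duration.ofSeconds(10))
                    .until(ExpectedConditions.elementToBeClickable(By.id("save")))
                    .click();
        }
    }

The smelly version couples the check to page layout and an arbitrary pause; the clean version expresses intent (wait until clickable) against a locator that survives layout changes.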

By the end of the workshop, you’ll be able to:

  • Identify code smells within test code
  • Understand the reasons why an approach is considered problematic
  • Implement clean coding practices within test automation

Angie Jones

Angie Jones is a Consulting Automation Engineer who advises several scrum teams on automation strategies and has developed automation frameworks for countless software products. As a Master Inventor, she is known for her innovative and out-of-the-box thinking style which has resulted in 22 patented inventions in the US and China. Angie shares her wealth of knowledge by speaking and teaching internationally at software conferences, serving as an Adjunct College Professor of Computer Programming, and teaching tech workshops to young girls through TechGirlz and Black Girls Code.


Do you work in an Agile, or fast-paced team? Are you often the bottleneck for getting releases out? Do you ever wonder if there isn’t a better way of doing things?

Many testers struggle to find time to do all the testing they want to do. Often, they wish to spend this time collaborating more with Product Managers, designers, users, or other people defining requirements. Maybe you just want more time to improve processes or learn new skills.

It is possible for testers to escape bug re-checking and mundane work by enabling the whole team to own testing. Delegating work is the secret to creating time for the interesting and worthwhile testing.

In this collaborative workshop, you will learn the techniques you need to empower others.

Learn to coach and to be coached. Learn how to ask questions that encourage team members to be more aware and accountable, rather than always looking to you for answers. Develop your ability to pass skills on to others.

By asking good questions, and developing coaching skills you’ll be equipped to lead change in your team.

This practical workshop is aimed at testers who want to improve their testing abilities and their team’s performance.

Learning Outcomes:

  • A better understanding of what coaching is
  • How to use coaching as a tester to get the most out of the people you work with
  • Practical coaching techniques, such as powerful questions, that you can use with your team
  • Greater awareness of the skills you already have that can be applied in coaching
  • What to look for in a coach and how to get coaching yourself
    Toby Sinclair

    I’m Toby The Tester, though recently I’ve been branching out into coaching. I started my career in testing 8 years ago. My testing journey started working for a UK-based testing consultancy with a leading retail company. Subsequently I moved to work for a charity who preserve and protect historic places across the UK. Today my journey has led me to the bright lights of London, where I help an organisation transform their working practices. I tweet and blog regularly as TobyTheTester and you’ll recognise me by the cartoon tram from my favourite childhood cartoon, Thomas the Tank Engine.

    Do you find yourself frustrated by the lack of challenge in your testing role, managing mountains of test cases, or increasingly aware of the bugs that slip through your net? Adopting Exploratory testing can help relieve these frustrations, but how do you go about performing ET in a way that is effective for both you and your team?

    Join Karo and Tracey for an interactive introduction to Exploratory testing where you will engage in discussions and exercises to learn how to:

    • Describe what Exploratory testing is and its value in software testing
    • Question a product or an idea to identify risks
    • Construct test charters based on risks
    • Execute an exploratory testing session
    • Conclude your exploratory testing with a debrief

    By the end of the session you will be able to conduct exploratory testing in a way that is:

    • Structured and well reported to support your team and stakeholders
    • Challenging and engaging for you whilst enabling you to test effectively and with speed

    Tracey Baxter

    Tracey has been a software tester for over ten years. During that time she has specialised in the testing of clinical software including Patient Administration Systems (PAS), Electronic Patient Record Systems (EPR), Clinical Decision Support and Primary Care software in the UK public sector.

    She is passionate about testing and delivering quality solutions that bring value to their users. You can get in touch with Tracey via Twitter @tbaxter78.


    Karo Stoltzenburg

    Karo currently enjoys working as a Senior Test Engineer at Linguamatics, which provides NLP text-mining software in the life science and healthcare domains. Before joining the test team at Linguamatics she worked in different industries on e-commerce platforms, web applications and supply chain management solutions, often as the sole tester and in both agile and waterfall environments.

    She loves that testing is such a diverse, creative and challenging activity and cherishes the opportunities for collaboration with other roles in the software development life cycle that come with it. Karo channels her urge to discuss and share anything testing as a co-organizer of the Ministry of Testing group in Cambridge, as a regular at the Cambridge Exploratory Workshop on Testing and through her blog (http://putzerfisch.wordpress.com). Having mentored at the London Software Testing Clinic several times, she’s thrilled to see the Clinic now coming to Cambridge. Find and engage with her on Twitter: @karostol.


    Afternoon Sessions

    As testers we care just as much about our projects and products as our Product Owners and Developers. With a history of being the people at the end of the chain (no longer the case, as we know), we tend to see the communication issues and impending risks, and somehow always feel inclined, or voluntarily willing, to help solve those problems. And yes, it always means our lives become easier! Win win!

    The Challenge

    So how can we help teach and build up a solid channel of communication with our peers (who all have different behaviours and characters), to help them stop, zoom out and avoid the rabbit hole? One of the best ways to build an individual's or a team's skills and understanding is often through games, metaphors and stories. Abstracting the problem and playing through potential solutions can make it easier to identify how to deal with different situations. Lessons are taken on board more easily if the learning is also fun.

    The Game Plan

    Our workshop session will comprise games and techniques covering a range of scenarios, for both individuals and teams to take away and use daily within their own teams. The games we’ll teach and play have direct practical uses: bringing teams together to work more productively and reducing communication difficulties. These are games that can prove a point in as little as 5 minutes, or work through relatively complicated risk-mitigation solutions in under an hour. We want you to leave with at least one technique that you can try the very next day at work, equipped with different methods that lead to tangible results around team dynamics, as well as an understanding of how to share these games with your teams in a fun and collaborative way.

    Nicola Sedgwick

    Nicola loves agile, creative, collaborative teams and believes that testers are in a great position to help teams achieve greatness. With 15 years in industry Nicola has experience on both sides of the supplier/customer relationship, with bespoke, off-the-shelf and cloud software, working waterfall or agile and testing across web, native mobile and internal network restricted apps. Throughout all this experience Nicola has found that common elements of transparency, effective communication and a common goal are vital for success.


    Christina Ohanian

    Christina is passionate about building and supporting self-organising teams and individuals from the ground up. Having started her career in software testing, embedding and building communities of practice, she soon discovered that as much as she loved being a tester, her purpose was destined for a different direction. She is now an Agile Coach and an active member of the Agile community of practice. She loves coaching and learning about people, their passions and what motivates them. She speaks, runs workshops and also runs her very own games event, #play14 London. Christina is also a graphics illustrator and enjoys bringing this into the workspace to run meetings and help teams collaborate.


    Web services and APIs make up a huge chunk of the code in the applications we test, but either we’re not aware of the APIs or we’re asked to focus on the user interface instead. Yet those APIs are where all of the business logic for the application is exposed, and they can hide some pretty nasty bugs. Web services and APIs can be tested in isolation, but they can also be tested in combination with the UI. Understanding how the UI and API work together can make it easier to troubleshoot when things go wrong from the UI, and creates a more complete picture of the application under test.

    In this workshop, we will cover:

    • Why web services and APIs are important to test
    • The differences between common types of web services
    • How HTTP response codes fit into your testing
    • How familiar UI tests translate to API tests (sketched after this list)
    • How to use Postman to test and share tests with your team
    • How to find the API calls your UI is making
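
    As a hedged illustration of that translation, here is a sketch of the same login behaviour exercised first through the browser and then directly against a hypothetical API endpoint, using Selenium WebDriver and the JDK’s HttpClient; all URLs, locators and credentials are invented for the example.

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import org.openqa.selenium.By;
        import org.openqa.selenium.WebDriver;

        class LoginTwoWays {
            // UI version: drive the browser, as a user would.
            void loginViaUi(WebDriver driver) {
                driver.get("https://example.test/login");
                driver.findElement(By.id("username")).sendKeys("alice");
                driver.findElement(By.id("password")).sendKeys("s3cret");
                driver.findElement(By.id("submit")).click();
            }

            // API version: exercise the same business rule directly.
            int loginViaApi() throws Exception {
                HttpRequest request = HttpRequest
                        .newBuilder(URI.create("https://example.test/api/login"))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(
                                "{\"username\":\"alice\",\"password\":\"s3cret\"}"))
                        .build();
                HttpResponse<Void> response = HttpClient.newHttpClient()
                        .send(request, HttpResponse.BodyHandlers.discarding());
                // 200 means accepted; 401 would tell us the credentials were rejected.
                return response.statusCode();
            }
        }

    The API version makes the expected response code explicit, which is exactly where HTTP response codes fit into your testing.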
    Hilary Weaver-Robb

    Hilary Weaver-Robb is a software quality architect at Detroit-based Quicken Loans. She is a mentor to her fellow testers, makes friends with developers, and helps teams level-up their quality processes, tools, and techniques. Hilary has always been passionate about improving the relationships between developers and testers, and evangelizes software testing as a rewarding, viable career. She runs the Motor City Software Testers user group, working to build a community of quality advocates. Hilary tweets (a lot) as @g33klady, and you can find tweet-by-tweet recaps of conferences she’s attended, as well as her thoughts and experiences in the testing world, at g33klady.com.


    Ever wished you had more control over third party API responses? Have you been unable to test specific API responses? Perhaps you’re trying to improve the stability of your automation suites? Have you just started writing unit and integration tests? Maybe you’re building a client for an API that hasn’t been built yet, and you want to get testing earlier? Facing these challenges, mocks, stubs, fakes and spies are essential to testability and can be used both in your automation and as a tool to aid you in your exploratory testing.

    In this workshop, the group will explore mocks, stubs, fakes and spies. You’ll come away with ideas on when these techniques are appropriate and how to gradually build up features in the tools you create to mimic services, and you will see just how quick it is to go from idea to working tool (a minimal hand-rolled sketch follows below). Some programming experience is preferred, but anyone with an interest in testability will find the workshop rewarding.

    Key takeaways are:

    • Recognise the common terminology used in the stubs, fakes, spies and mocks domain
    • Understand the difference between stubs, fakes, spies and mocks through their characteristics and use cases
    • Apply this foundational knowledge to build a gradually more featured tool to illustrate the journey from stub to fully fledged mock

    The systems we test are massively integrated with many different data sources, and this is only going to increase. With the ability to mimic key services, your dependencies won’t be the bottleneck that stops you from delivering information of value, early and often.
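
    As a taste of the terminology, here is a hedged, hand-rolled Java sketch of a stub and a spy for an invented PaymentGateway dependency; the names are hypothetical, and real projects might reach for a library such as Mockito instead.

        import java.util.ArrayList;
        import java.util.List;

        // The dependency we want to stand in for during a test.
        interface PaymentGateway {
            boolean charge(String account, int amountPence);
        }

        // Stub: returns a canned answer so the code under test can run
        // without a real payment provider.
        class AlwaysApprovesStub implements PaymentGateway {
            public boolean charge(String account, int amountPence) {
                return true;
            }
        }

        // Spy: also records how it was called, so a test can assert on the
        // interaction afterwards (which account, how many times).
        class RecordingSpy implements PaymentGateway {
            final List<String> chargedAccounts = new ArrayList<>();

            public boolean charge(String account, int amountPence) {
                chargedAccounts.add(account);
                return true;
            }
        }

    A fake would go a step further and carry real but simplified behaviour (say, an in-memory ledger), while a mock is typically set up with its expected calls in advance and verifies them itself.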

    Christopher Chant

    Christopher Chant is a determined and passionate test professional with experience across multiple domains. He has learned to embrace all parts of the development lifecycle as learning opportunities: working in business analysis, development, testing and coaching roles in an attempt to help teams grow and deliver.

    When not testing, Christopher spends his time running (not often enough), traveling all over the country to watch Nottingham Forest F.C. lose (occasionally they win), jealously looking at other people's dogs and playing board games.


    Ash Winter
    Ash Winter is a learning tester, conference speaker and unashamed internal critic, with an eye for an untested assumption or claim. He is a veteran of various roles encompassing testing, performance engineering and automation, both as a team member delivering mobile apps and web services and as a leader of teams and change. He helps teams think about testing problems, asking questions and coaching when invited.
    Conference

    “There were a lot of talks that weren’t directly about testing.” This is a real statement from a developer friend of Alex’s who came to TestBash Utrecht.

    And it’s true. We do talk a lot about things like communication, general problem solving and teamwork. Because the role of a tester in teams has been changing for a while and will continue to do so. We’re coaches, encouragers, quality enablers. And that’s an extra level of difficulty in any team - dealing with the sticky human factors alongside the domain knowledge, risk analysis and testing and quality work we do.

    It’s difficult because testers often end up needing strong leadership skills, in teams where they have no formal (or informal) authority to lead. And yet we also want to stay away from becoming the quality gatekeepers (again). That path leads to the dark side…

    In this talk, Huib and Alex will take you through some tried and tested Jedi mind tricks to help testers gain the standing they need, for example, 'the invitation game', 'the one place where we should assume' and the 'rule of asking three times'.

    We won’t be advocating for actually manipulating people, but nevertheless, there are concrete steps we can take that can help us if we just believe: Do or do not, there is no try.

    Huib Schoots

    Huib Schoots is a tester, consultant and people lover. He shares his passion for testing through coaching, training, and giving presentations on a variety of test subjects. With almost twenty years of experience in IT and software testing, Huib is experienced in different testing roles. Curious and passionate, he is an agile and context-driven tester who attempts to read everything ever published on software testing. A member of TestNet, AST and ISST, black-belt in the Miagi-Do School of software testing and co-author of a book about the future of software testing. Huib maintains a blog on magnifiant.com and tweets as @huibschoots. He works for Improve Quality Services, a provider of consultancy and training in the field of testing. Huib has a huge passion for music and plays trombone in a brass band.


    Alexandra Schladebeck

    Alex fell into IT and testing in 2005 and has never looked back. She is now Head of Software Quality and Test Consulting at BREDEX GmbH and spends her time working with test consultants, developers and customers to help them towards great quality.

    You’ll usually find Alex talking about quality and how it affects the whole development process. She’s also a frequent speaker at conferences where she likes to share her project experiences and learn from other practitioners.


    No tester wants to hear a developer say “It works on my machine!” because what it actually says is: “Since it worked on my development environment, I assume it also works on your test environment, hence you cannot possibly have found a bug.”

    We know this not to be true, yet we make the same assumption between environments at a later stage: we test our software on test environments and assume that our test results carry over to production. We are not actually testing the software in the setting where our users encounter it.

    To top it off, we spend a considerable amount of money trying to copy production. Managing test environments is often hard and complex, and it needs a lot of maintenance effort.

    A lot of people are already using techniques which take testing into production, like Beta Testing, A/B Testing or Monitoring as Testing. We intend to push the envelope a little further and additionally move acceptance testing, automated checks or exploration to the production stage. To do so we need to take several things into consideration, such as making sure test data does not pollute production data and analytics, as well as hiding untested features from customers (two such guards are sketched after the takeaways list below).

    In this talk you will learn about popular testing-in-production techniques. We also want to show you some strategies that help tackle common constraints you will face, and to provide you with an approach to gradually shift your testing to production.

    Key Takeaways:

    • An introduction to popular testing-in-production techniques like Beta Testing, A/B Testing or Monitoring as Testing
    • Strategies for tackling common constraints when trying to test in production
    • An approach for gradually shifting your testing from test environments to production
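
    The following minimal Java sketch shows one hedged way to implement the two guards mentioned above, hiding an untested feature behind an allow-list flag and tagging synthetic test traffic so analytics can filter it out; every name here is hypothetical, not the speakers’ implementation.

        import java.util.Map;
        import java.util.Set;

        // Hypothetical guards for testing safely in production.
        class ProductionTestingGuards {
            private static final String SYNTHETIC_HEADER = "X-Synthetic-Test";

            // Feature flag: only an allow-list of beta users sees the
            // untested feature; everyone else gets the current behaviour.
            boolean featureVisibleTo(String userId, Set<String> betaUsers) {
                return betaUsers.contains(userId);
            }

            // Traffic tagging: requests generated by tests carry a marker
            // header, so analytics and billing pipelines can exclude them.
            boolean isSyntheticRequest(Map<String, String> headers) {
                return "true".equals(headers.get(SYNTHETIC_HEADER));
            }
        }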

    Marcel Gehlen

    Marcel Gehlen is team lead for DevOps & Cloud Native at MaibornWolff. He started out as a developer who always had more fun testing his code than actually writing it, and therefore decided to switch careers. Marcel has worked in various industries spanning from automotive to customer loyalty programs. After ten years of testing software, his focus currently lies on test automation and exploratory testing. Marcel currently helps a big customer from the financial sector transition to Continuous Delivery and answer the question: “How do I set up my testing in a Continuous Delivery environment?”

    Marcel tweets as @Marcel_Gehlen and occasionally blogs on thatsthebuffettable.blogspot.com.


    Benjamin Hofmann

    Benjamin Hofmann works as a DevOps Engineer in Test at MaibornWolff. He has multiple years of experience in software test and engineering. His focus lies in Test Automation, Quality Assurance in an agile context as well as Continuous Testing as an essential part of DevOps. In his current project he is responsible for the test strategy surrounding a microservice architecture.


    Coaching is an almost essential activity in today's software world. Eighteen months ago, Redgate Software, the company I work for, decided to shift testing activities and quality responsibility from testers to software engineers.

    The company didn't want to just drop those responsibilities on engineers without any help, so they tasked some people with becoming Quality Coaches, which is my current job title.

    As well as talking about my journey as a coach, I would like to present an experience report on using the GROW model (from John Whitmore) to coach developers, and other roles including user experience designers, on software testing.

    It's one of the most interesting techniques I've heard about, and I studied it before putting it into practice. It can be applied in any context, and so far I have had moderate success with it, although it wasn't all plain sailing and I still have a long way to go.

    Some of the learning outcomes I expect attendees to gain from this talk are:

    • The different meanings of the word coaching
    • What a quality coach does and doesn't do
    • The four GROW model stages (Goal, Reality, Options, Will)
    • Techniques to help overcome barriers that I've found along the way
    • Real work situations where the GROW model helped, and others where it didn't and shouldn't have been used
    • Additional resources that relate in particular to the testing context

    José Lima

    José started his professional career as a test engineer at Cambridge-based Redgate Software, and has always been an advocate of quality.

    Last year he became a quality coach in the hope of spreading the lessons he’d learned to the various product teams and software engineers.

    He spends his time studying software testing related topics and working with different teams and individuals around the business.


    Security testing, also known as white-hat hacking, is a special art of testing. In this talk I will share my experiences as a white-hat hacker and how the role differs from being a software tester in a development team.

    Whereas a software tester is usually involved throughout the development process, a security tester may see the piece of software for the first time when the audit is about to begin. The information you get beforehand varies from exhaustive documentation overload to nothing at all. Sometimes there's even hostility involved; the expectation is that the less you tell, the fewer security bugs will be found.

    Another example is requirements. Testing usually involves a set of requirements to compare against. Security testing, on the other hand, may have no original requirements at all (security is an afterthought). There are frameworks to refer to, but you might have to make up your own requirements case by case. Sometimes very weird customer expectations, and fears from the developers, act as additional requirements.

    As for similarities: in any testing activity your best reward is the feeling of having filed a critical bug and then verifying the fix. Although I must confess there's something special about getting that first alert(XSS) popup.

    Key takeaways from this talk:

    • What kind of security-related testing you can do on your software without being a pentester or having any information security background (a minimal example is sketched after this list)
    • What to take into account, and how to succeed, when hiring external security consultants to do security audits or penetration testing
    • What you can achieve with automation in security testing
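
    To make the first takeaway concrete, here is a hedged sketch of a simple reflected-XSS probe any tester could run: submit a script payload to a hypothetical search endpoint and check whether it comes back unencoded. This illustrates the idea only; it is not the speaker's material, and the URL and parameter are invented.

        import java.net.URI;
        import java.net.URLEncoder;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.nio.charset.StandardCharsets;

        class ReflectedXssProbe {
            public static void main(String[] args) throws Exception {
                String payload = "<script>alert('XSS')</script>";
                String url = "https://example.test/search?q="
                        + URLEncoder.encode(payload, StandardCharsets.UTF_8);

                HttpResponse<String> response = HttpClient.newHttpClient().send(
                        HttpRequest.newBuilder(URI.create(url)).GET().build(),
                        HttpResponse.BodyHandlers.ofString());

                // If the raw payload appears in the HTML, the input was not
                // encoded on output: a classic reflected-XSS symptom.
                if (response.body().contains(payload)) {
                    System.out.println("Possible reflected XSS: payload echoed unencoded");
                } else {
                    System.out.println("Payload not reflected verbatim");
                }
            }
        }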

    Anne Oikarinen

    Anne Oikarinen is a Senior Security Consultant who works with security and software development teams to help them design and develop secure software. Anne believes that cyber security is an essential part of software quality.

    After working for several years in a security software development team in various duties such as testing, test management, training, network design and product owner tasks, Anne focused her career fully on cyber security. In her current job at Nixu Corporation, Anne divides her time between hacking and threat analysis, although as a network geek she will also ensure that your network architecture is secure. Anne also has experience in incident response and security awareness from working at the National Cyber Security Centre of Finland.

    Anne holds a Master of Science (Technology) degree in Communication Networks and Protocols from Tampere University of Technology, Finland.


    Communication in agile teams is supposed to be seamless and much better than in the “old days”. But whether you are in an independent testing team and need to communicate more formally, or communicating face-to-face in an agile context, how can you be effective when you are sometimes telling people what they don’t want to hear?

    What do testers do? They are critics, often of other people’s work; they need to communicate their findings successfully and with confidence.

    In this talk, Dorothy Graham looks at what criticism is, its different types, and “DASR”, an effective script for giving criticism (technical or otherwise). Knowing how we respond to being criticized also gives useful insight into criticizing others. Dot covers different types and styles of communication, including Virginia Satir’s communication interaction model, illustrated with an example.

    Self-confidence can make a large difference in our effectiveness as a critical (in the best sense of the word) tester; both under-confidence and over-confidence can be damaging. It is far more common for women to be under-confident, but there are ways to overcome it.

    Takeaways:

    • criticism is what testers do; it is important to understand how it feels to be criticised and how to criticise well
    • the way in which we communicate is crucial to interpersonal interactions
    • the right level of confidence is needed to be an effective tester

    Dorothy Graham

    Dorothy Graham has been in software testing for over 40 years and is co-author of four books: Software Inspection, Software Test Automation, Foundations of Software Testing and Experiences of Test Automation. She is currently working on a wiki on Test Automation Patterns with Seretta Gamba.

    Dot is a popular speaker at international conferences world-wide. She has been on the boards of many conferences and publications in software testing, and was programme chair for EuroSTAR in 1993 (the first) and 2009. She was a founder member of the ISEB Software Testing Board and was a member of the working party that developed the ISTQB Foundation Syllabus. She founded Grove Consultants and provided training and consultancy in software testing for many years, returning to being an independent consultant in 2008.

    She was awarded the European Excellence Award in Software Testing in 1999 and the first ISTQB Excellence Award in 2012.


    In early 2017 I was promoted to QA manager and, right off the bat, thrown into two recruitment processes. I was terrified. I knew from previous experience that I am really bad at traditional interviewing techniques, and suddenly I could not even hide behind someone else making the decisions. During my career I've interviewed potential colleagues, team members and interns, and I've always felt the outcome depended heavily on the candidate’s confidence rather than my questions.

    Our recruitment process included three interviews and three online tests. I felt it tended to favour glossy resumes and interview-trained professionals, as well as being biased towards whatever personality type the recruiting manager had.

    I wanted to do something different; something that used my testing and programming background and could be used to assess both juniors and seniors on an even playing field. I started out looking for available exercises, but the things I found were limited, generic and all focused on testing in front of other people. That also favours a particular type of person and, in addition, it wouldn't give me all the answers I wanted:

    • How well do they read instructions?
    • Do they have the guts to question?
    • Can they make reasonable assumptions?
    • How do they adapt to something unexpected?
    • Can they document and communicate their findings?
    • Can they answer questions about their work?
    • ...

    In this experience report I’ll share my thoughts on why traditional interview processes are outdated, and I’ll show you an alternative way of doing it. I’ll talk about successes, setbacks and how we plan to improve the exercise moving forward.

    It's about figuring out what makes a tester, how to compare apples to biscuits and how you should always expect the unexpected.

    In short: I will talk about putting candidates to the test.

    Takeaways:

    • Why standard recruitment processes are biased and focus too much on history
    • Ideas on how to improve recruitment processes for testers or other roles
    • How to design a scope small enough to handle but with enough challenge

    Lena Wiberg

    Lena has been in the IT industry since 1999, when she got her first job as a developer. Testing and requirements have always been part of her job, but in 2009 she decided to take the step into testing full-time and has never looked back. Lena has worked as a single tester, test lead, test manager and senior test manager, and nowadays she is team manager for the QA division at AFA Insurance. She is also involved with software testing education in Sweden, both as chairman of one of the schools and by mentoring interns to give them the best internship possible.

    Lena lives in a big house filled with gaming stuff, books, sewing machines and fabric. Gaming is a big thing for everyone in the family and something she loves talking about. Biggest achievement: the Dance Dance Revolution machine taking up half of her living-room space.


    We used to believe in the statement “testers should be objective”. This translated into separate test teams, because letting testers work together with developers would mess with their objectivity. Thankfully, we are now working together, but what about testers’ objectivity? Does that even exist?

    The book “Thinking, Fast and Slow” by Daniel Kahneman has been mentioned at many conferences, and with good reason! In this book, the myth of objectivity is debunked scientifically. We take many shortcuts in our brains without even realising it (fast thinking). In this talk, biases like the ‘anchoring effect’, the ‘confirmation bias’, ‘priming’ and others will be mapped to testing. The goal is to accept that we are biased and learn to work with our biases instead of against them.

    This is important if you want to become more aware of your biases and start your journey of self-improvement as an agile team member. This talk is mostly geared towards testers, but some of the biases will also relate to Scrum and agile principles in general. It doesn’t matter how experienced you are, learning (more) about this can never hurt. My goal is that you will be inspired to work differently and know more about your own biases.

    Attendees will take away:

    • What 'fast thinking' and 'slow thinking' mean
    • What 'cognitive bias' means, and how specific biases can influence their testing activities
    • Stories of how I personally was influenced by biases and how I try to learn from those moments; hopefully my examples will feel familiar, so that the abstract concept of a bias is mapped to a testing reality for the audience

    Maaike Brinkhof

    I’m Maaike, owner and test consultant at Sensibly. It’s my mission to make other people around me quality infected. I’m in my element when I can teach others about testing and learn more from others about testing. When smart minds synthesise, awesome things happen. My testing approach is always people-centred and thinking-centred, tools and techniques follow.

    After reading the book “Thinking, Fast and Slow” by Daniel Kahneman I’ve developed a special interest in the role of psychology in software development. Over the years I’ve seen countless times how cognitive biases influence thinking in testing and software development as a whole and it fascinates me. I keep re-reading the book and other papers about biases and fallacies and learn new things every time.

    During ‘analogue time’ I like to practice yoga, go for a run, read books, check out new local beers, play my clarinet and travel with my boyfriend. www.maaikebrinkhof.nl www.sensibly.nl


    Serverless microservices are on the rise as organisations adopt this relatively new technology to power new products and applications, Alexa and chatbots being popular examples of serverless. To some, serverless microservices are the future of architecture implementation, and they are disrupting DevOps and traditional development and testing models.

    This talk is an experience report on how our team fell into a serverless microservices implementation without any real experience of serverless technology, and how we learned about it along the way. I will give a brief overview of what serverless microservices are and their benefits, and how we went from initial sceptics to advocates along our journey of discovery.

    I will describe our team's test-first mentality and the multiple functional and non-functional approaches and patterns we applied to serverless, as well as the remaining challenges of testing it. That includes how we ran exploratory performance tests to determine and quantify language deficiencies on the serverless platform in order to help drive our implementation approach, and the challenges of monitoring serverless applications and observability, and why that's important for testing. I will also share some experiments we've run using serverless as a low-cost, low-maintenance automation utility for testing your APIs functionally and non-functionally, and how it could help test complex service flows (a minimal sketch follows the takeaways list below).

    Takeaways

    • What is serverless and why you as a tester should care
    • Experience report on a cloud migration project and why it's important to apply an agile team “test first” mentality to new technology adoption
    • How serverless both simplifies and complicates your test approach
    • How to structure test automation alongside high levels of exploratory testing
    • Using exploratory performance testing to help drive an implementation approach for serverless
    • How you can use serverless as a test tool & utility and its advantages
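
    As one hedged illustration of that last takeaway, here is a minimal sketch of a serverless function (AWS Lambda, Java) that probes an API endpoint and reports pass or fail; the class, the URL and the wiring to a schedule are all invented for the example, not the speaker's implementation.

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.util.Map;
        import com.amazonaws.services.lambda.runtime.Context;
        import com.amazonaws.services.lambda.runtime.RequestHandler;

        // Hypothetical scheduled health check, deployed as a function so
        // there is no server to maintain; a platform schedule (e.g. a cron
        // trigger) invokes it and the result lands wherever you route it.
        public class ApiHealthCheck implements RequestHandler<Map<String, String>, String> {
            @Override
            public String handleRequest(Map<String, String> input, Context context) {
                String url = input.getOrDefault("url", "https://example.test/health");
                try {
                    HttpResponse<Void> response = HttpClient.newHttpClient().send(
                            HttpRequest.newBuilder(URI.create(url)).GET().build(),
                            HttpResponse.BodyHandlers.discarding());
                    return response.statusCode() == 200
                            ? "PASS"
                            : "FAIL: HTTP " + response.statusCode();
                } catch (Exception e) {
                    return "FAIL: " + e.getMessage();
                }
            }
        }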

    Conall Bennett
    Conall is a Test Lead at CME Group Belfast, working in a cloud migration team involved in the enterprise adoption of DevOps practices, cloud-native technologies and microservice architecture. Prior to this, Conall worked in consulting and in various financial enterprise organisations in a variety of Agile and testing leadership roles. Conall has been involved locally in co-organising the NI tester meet-up and contributing to various other local tech meetups in Belfast. He spends most of his time evangelising on Agile and testing practices, tinkering with emerging technologies and trying to find new ways (or rediscover old ways) of causing trouble via testing while exploring new technology.
    Visual testing can help find mistakes before they slip through the net: no longer having to play spot-the-difference to find potentially high-impact mistakes like misaligned buttons that can be selected by Selenium but are hidden from a real user, or text and images accidentally disappearing off screen.

    This session looks at common issues with relying solely on end-to-end automation tools, using examples to demonstrate common pitfalls and how visual testing can add another tool to your tool belt. The talk covers why we automate tests, the issue with relying only on manual testing, common end-to-end automation pitfalls, a brief introduction to visual testing and, finally, common issues with visual testing and ways to overcome them.

    Through interactive examples, the audience will gain an understanding of why relying on just manual testing can become an issue and how too much automation can have a negative impact, by looking at testing anti-patterns. The audience will also learn what visual testing is, what tools are available, and some of the common pitfalls of visual testing, along with tips on overcoming them based on the experience of creating a custom visual test framework at my current employer. A toy version of the core idea is sketched below.
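
    At its core, a visual test compares a baseline screenshot with a fresh capture. The toy sketch below counts differing pixels using nothing but the JDK's ImageIO; real visual testing tools add baseline management, ignore regions and perceptual tolerance, and the file handling here is deliberately naive.

        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        // Naive pixel-by-pixel comparison of a baseline screenshot against
        // a new capture.
        public class VisualDiff {
            public static long differingPixels(File baselineFile, File currentFile) throws Exception {
                BufferedImage baseline = ImageIO.read(baselineFile);
                BufferedImage current = ImageIO.read(currentFile);
                if (baseline.getWidth() != current.getWidth()
                        || baseline.getHeight() != current.getHeight()) {
                    return Long.MAX_VALUE; // dimension change: treat as fully different
                }
                long diff = 0;
                for (int y = 0; y < baseline.getHeight(); y++) {
                    for (int x = 0; x < baseline.getWidth(); x++) {
                        if (baseline.getRGB(x, y) != current.getRGB(x, y)) {
                            diff++;
                        }
                    }
                }
                return diff;
            }
        }

    In practice you would capture the “current” image with your UI automation tool and fail the check when the difference exceeds an agreed threshold.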
    Viv Richards
    Viv Richards is a senior test automation engineer, a blogger and a community bumblebee. In his spare time he enjoys teaching children to code as a CodeClub volunteer, as well as bringing communities together to share skills and knowledge by organising local meet-ups, including South Wales' largest agile and developer conference.