TestBash Brighton 2018

March 15th 2018 - March 16th 2018

TestBash Brighton, the home of testing friends, is back for a whole week of jam-packed learning and fun! And we’re returning to The Clarendon Centre for a second year running.

This year is a 2-day event with some additional pre-training courses and an open space thrown in for good measure.

For the pre-training courses, we’re hosting a shiny new 3-day Automation in Testing course with Mark Winteringham and Richard Bradshaw from Monday the 12th - Wednesday the 14th March. We’ve also got a one-day Lean Testing course with Aaron Hodder on Wednesday 14th March.

The Thursday is our workshop day, with 10 half-day workshops to choose from. As an extra bonus, we also have 2 talks!

The Friday will be a chock-full, single-track conference consisting of 9 talks and The UnExpo (attendee-driven expo), followed by a fun-filled tester get together!

On Saturday, we will be hosting our Open Space, a fantastic opportunity to create your own schedule with the other attendees to talk and learn about the topics that interest you.

On all the days, you can expect to find a wonderful community coming together in a friendly, professional and safe environment. We think you'll feel right at home as soon as you arrive!

Expand the talks below to get more information, or click register now! Conference-day tickets start from £299 inc. VAT; tickets including the workshops start from £699 inc. VAT. Tickets are limited!

Training
Monday, 12th March 2018:

Automation in Testing - 3 Day Course - 12th-14th March 2018
Richard Bradshaw & Mark Winteringham

What Do We Mean By ‘Automation in Testing’?

Automation in Testing is a new namespace designed by Richard Bradshaw and Mark Winteringham. The use of automation within testing is changing, and in our opinion, existing terminology such as Test Automation is tarnished and no longer fit for purpose. So instead of having lengthy discussions about what Test Automation is, we’ve created our own namespace which provides a holistic, experience-based view on how you can and should be utilising automation in your testing.

Why You Should Take This Course

Automation is everywhere; its popularity and uptake have rocketed in recent years, and it’s showing little sign of slowing down. So in order to remain relevant, you need to know how to code, right? No. While knowing how to code is a great tool in your toolbelt, there is far more to automation than writing code.

Automation doesn’t tell you:

  • what tests you should create
  • what data your tests require
  • what layer in your application you should write them at
  • what language or framework to use
  • if your testability is good enough
  • if it’s helping you solve your testing problems

It’s down to you to answer those questions and make those decisions. Answering those questions is significantly harder than writing the code. Yet our industry is pushing people straight into code and bypassing the theory. We hope to address that with this course by focusing on the theory that will give you a foundation of knowledge to master automation.

This is an intensive three-day course in which we use our sample product to go on an automation journey. The product already has some automated tests, and it already has some tools designed to help test it. Throughout the three days we will explore those tests: why they exist, the decisions behind the tools we chose to implement them in, why that design, and why those assertions. Then there are the tools: we’ll show you how to expand your thinking and strategy beyond automated tests to identify tools that can support other testing activities. As a group, we will then add more automation to the project, exploring the why, where, when, who, what and how of each piece we add.

What You Will Learn On This Course

Online
To maximise our face to face time, we’ve created some online video-based courses to set the foundation for the class, allowing us to hit the ground running with some example scenarios.

After completing the online courses attendees will be able to:

  • Describe and explain some key concepts/terminology associated with programming
  • Interpret and explain real code examples
  • Design pseudocode for a potential automated test
  • Develop a basic understanding of programming languages relevant to the AiT course
  • Explain the basic functionality of a test framework

Day One
The first half of day one is all about the current state of automation: why AiT is important, and all the skills required to succeed with automation in the context of testing.

The second half of the day will be spent exploring our test product along with all its automation and openly discussing our choices, reverse-engineering the decisions we’ve made to understand why we implemented those tests and built those tools.

By the end of day one, attendees will be able to:

  • Survey and dissect the current state of automation usage in the industry
  • Compare their company’s usage of automation with that of other attendees
  • Describe the principles of Automation in Testing
  • Describe the difference between checking and testing
  • Recognize and elaborate on all the skills required to succeed with automation
  • Model the ideal automation specialist
  • Dissect existing automated checks to determine their purpose and intentions
  • Show the value of automated checking

Day Two
The first half of day two will continue with our focus on automated checking. We are going to explore what it takes to design and implement reliable, focused automated checks. We’ll do this at many interfaces of the application.

The second half of the day focuses on the techniques and skills a toolsmith employs. Building tools to support all types of testing is at the heart of AiT. We’re going to explore how to spot opportunities for tools, and how the skills required to build tools are nearly identical to building automated checks.

By the end of day two, attendees will be able to:

  • Differentiate between human testing and an automated check, and teach it to others
  • Describe the anatomy of an automated check
  • Model an application to determine the best interface at which to create an automated check
  • Discover new libraries and frameworks to assist with automated checking
  • Implement automated checks at the API, JavaScript, UI and visual interfaces
  • Discover opportunities to design automation to assist testing
  • Appreciate that techniques and tools like CI, virtualisation, stubbing, data management, state management, bash scripts and more are within reach of all testers
  • Propose potential tools for their current testing contexts

Day Three
We’ll start day three by concluding our exploration of toolsmithing: creating some new tools for the test app and discussing the potential for tools in attendees’ companies. The middle part of day three will be spent talking about how to talk about automation.

It’s commonly said that testers aren’t very good at talking about testing; the same is true of automation. We need to change this.

By the end of day three, attendees will be able to:

  • Justify the need for tooling beyond automated checks, and convince others
  • Design and implement some custom tools
  • Debate the use of automation in modern testing
  • Devise and coherently explain an AiT strategy

What You Will Need To Bring

Please bring a laptop (OS X, Linux or Windows) with all the prerequisites installed; these will be sent to you before the course.

Is This Course For You?

Are you currently working in automation?
If yes, we believe this course will provide you with numerous new ways to think and talk about automation, allowing you to maximise your skills in the workplace.
If no, this course will show you that the majority of skill in automation is about risk identification, strategy and test design, and that you can add a lot of value to automation efforts within testing.

I don’t have any programming skills, should I attend?
Yes. The online courses will be made available several months before the class, allowing you to establish a foundation ready for the face to face class. Then full support will be available from us and other attendees during the class.

I don’t work in the web space, should I attend?
The majority of the tooling we will use and demo is web-based, however, AiT is a mindset, so we believe you will benefit from attending the class and learning a theory to apply to any product/language.

I’m a manager who is interested in strategy but not programming, should I attend?
Yes, one of our core drivers is to educate others in identifying and strategising around problems before automating them. We will offer techniques and teach you skills to become better at analysing your context and using that information to build a plan towards successful automation.

What languages and tools will we be using?
The current setup uses Java and JavaScript. Importantly though, we focus more on the thinking than the implementation, so while we’ll be reading and writing code, the languages are just a vehicle for the context of the class.

Richard Bradshaw
Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years’ testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, and is currently the FriendlyBoss at The Ministry of Testing. Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He is also the creator of the YouTube channel Whiteboard Testing.
Mark Winteringham

I am a tester, coach, mentor, teacher and international speaker, presenting workshops and talks on technical testing techniques. I’ve worked on award-winning projects across a wide variety of technology sectors, from broadcast and digital to finance and the public sector, working with various web, mobile and desktop technologies.

I’m an expert in technical testing and test automation, and a passionate advocate of risk-based automation and automation in testing practices, which I regularly blog about at mwtestconsultancy.co.uk. I’m also the co-founder of the Software Testing Clinic in London, a regular workshop where new and junior testers receive free mentoring and lessons in software testing. I have a keen interest in various technologies, and regularly develop new apps and Internet of Things devices. You can get in touch with me on Twitter: @2bittester


Wednesday, 14th March 2018:

Lean Testing - 1 Day Course - 14th March 2018
Aaron Hodder

Summary

Do you think that testing can be more than writing and executing test cases? That there is more to testing than just checking requirements? Kick your testing up a gear and discover how challenging and invigorating advanced testing can be. Lean Testing teaches a collaborative and investigative test approach that will reignite your passion and improve how you test.

Description

This course focuses on how to make testing more valuable by teaching testers how to test better. This intensive one-day course introduces a variety of proven techniques for testers to complete their work faster and to a higher standard, using critical thinking and advanced test analysis techniques. The course covers the whole test process, from planning to execution, and gives practical examples at each step.

Learning Outcomes

While mastering testing skills will require both time and practice, this workshop will allow you to:

  • Apply heuristic test techniques
  • Develop and critique visual coverage models
  • Recognise and explain the oracles used to identify bugs
  • Write specific and effective test charters to focus exploratory testing
  • Gain awareness of how to write descriptive test reports and see examples of lean testing documentation
  • Conduct feature tours to produce state transition diagrams
  • Use Hendrickson variables to identify variance classes
  • Gain knowledge of combinatorial test design
  • Identify cognitive biases that can affect your testing
  • Advocate for exploratory testing in your test approach
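
To illustrate why combinatorial test design matters, consider a small hypothetical example (the variables and values below are invented for illustration): even three test variables quickly multiply into many cases.

```javascript
// Hypothetical variance classes for three test variables.
const browsers = ['Chrome', 'Firefox', 'Safari'];
const roles = ['admin', 'member', 'guest'];
const locales = ['en-GB', 'fr-FR'];

// Exhaustive testing needs every combination: 3 * 3 * 2 = 18 cases.
const allCombinations = [];
for (const b of browsers)
  for (const r of roles)
    for (const l of locales)
      allCombinations.push([b, r, l]);

console.log('exhaustive cases:', allCombinations.length); // 18
// A pairwise (2-way) design instead covers every *pair* of values
// with far fewer cases -- here as few as 9 -- which is the trade-off
// combinatorial test design makes explicit.
```

With more variables the exhaustive count explodes multiplicatively, which is why pairwise and similar designs are worth knowing about.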

Agenda

All participants apply the concepts introduced through extensive practical activities that allow them to experience everything learned. Interactive testing exercises encourage group collaboration and get attendees working hands-on testing real software.

Prerequisites

A laptop with a USB drive that you have administrator access to.

Aaron Hodder

Aaron Hodder hails from Wellington, New Zealand, where he works for Assurity Consulting to coach testers to develop and deliver new and innovative testing practices to better suit the demands of modern-day software development.

Aaron is a passionate software tester with a particular enthusiasm for visual test modelling and structured exploratory testing techniques. He regularly blogs and tweets about testing and is a co-founder of Wellington Testing Workshops.


Workshops
Thursday, 15th March 2018:
Morning Sessions

[TALK] - How to Bring Change Without Permission
Bhagya Perera

This is not a workshop. This is a talk that all attendees of the workshop day will attend in the main auditorium.

Sitting at your desk, testing another user story, you think of changing the process by adding more automated scripts to free your time for more exploratory testing. You propose end-to-end test automation on top of the small-scale automation you already have, only to meet the attitude of “that will not work here, we tried it before”. You still believe in it strongly, so where do you go when you’re told not to do it?

In this talk, I will share my story of being the tester whose proposal was turned down because of previous bad experiences. Instead of abandoning the needed end-to-end automation, I made sure I did not compromise my regular work but still made time to work on it. Showing it could be done changed “it won’t work” to “this is really helpful”. Join me to learn how to break down barriers for the things you believe in.

Takeaways:

  • How consistent use of an hour a day enables you to do things you feel strongly about without compromising your other work
  • Importance of believing in yourself when others in the team are less aware of your abilities
  • How showing the results wins over past experiences of the team

Bhagya Perera

Bhagya Perera has been a software test analyst for almost 10 years, after changing career from software development. She is originally from Sri Lanka, and currently lives and works in England.

Bhagya is passionate about testing and brings experience of working in multi-cultural distributed teams, as well as of experimenting with automation and manual testing. In addition, she has strong interests in leadership, effective communication and mentoring, which broaden her knowledge.


Exploratory Testing 101
Dan Ashby & Mark Winteringham

Do you find yourself frustrated by the lack of challenge in your testing role, managing mountains of test cases, or increasingly aware of the bugs that slip through your net? Adopting Exploratory testing can help relieve these frustrations, but how do you go about performing ET in a way that is effective for both you and your team?

Join Dan and Mark for an interactive introduction to Exploratory testing where you will engage in discussions and exercises to learn how to:

  • Describe what Exploratory testing is and its value in software testing
  • Question a product or an idea to identify risks
  • Construct test charters based on risks
  • Execute an exploratory testing session
  • Conclude your exploratory testing with a debrief

By the end of the session you will be able to conduct exploratory testing in a way that is:

  • Structured and well reported to support your team and stakeholders
  • Challenging and engaging for you whilst enabling you to test effectively and with speed

Dan Ashby
Dan is a SW Tester and he likes Porridge! (and whisky!)
Mark Winteringham

I am a tester, coach, mentor, teacher and international speaker, presenting workshops and talks on technical testing techniques. I’ve worked on award-winning projects across a wide variety of technology sectors, from broadcast and digital to finance and the public sector, working with various web, mobile and desktop technologies.

I’m an expert in technical testing and test automation, and a passionate advocate of risk-based automation and automation in testing practices, which I regularly blog about at mwtestconsultancy.co.uk. I’m also the co-founder of the Software Testing Clinic in London, a regular workshop where new and junior testers receive free mentoring and lessons in software testing. I have a keen interest in various technologies, and regularly develop new apps and Internet of Things devices. You can get in touch with me on Twitter: @2bittester


Get Beyond the Surface: Exploratory Testing of Web Services
Amber Race

Much exploratory testing tends to concentrate on the client UI. But the same strategies that work well at the UI level can also be applied to the web services and APIs that power your application. Exploratory testing of your web services has multiple benefits. By exploring your APIs you can:

  • Find critical bugs earlier in the development cycle
  • Gain a deeper understanding of how your application or feature works
  • Have greater confidence in your UI level testing, knowing that the services underneath are solid

In this hands-on workshop using both paper and laptop exercises, participants will learn:

  • How to ask the right questions about services and API design
  • Freely available tools for exploring web services
  • Tips and tricks for finding service-level issues

Get beyond the surface of your applications and discover how fun and rewarding API testing can be!

Workshop attendees will learn how exploratory testing can extend beyond the application layer so they can find and potentially fix issues earlier in the development cycle. The workshop will include multiple activities so that attendees can gain confidence in their ability to test web services.

Amber Race

Amber Race is a Senior SDET at Big Fish Games.

After majoring in Asian Studies, teaching in Japan, and travelling the world, she stumbled into software testing and has been loving it ever since. She has over 15 years of testing experience at Big Fish and Microsoft, doing everything from manual application testing to tools development to writing automation frameworks for web services.

Amber has worked on a wide variety of products and written automation in C#, C++, Python, and Java.

She currently specialises in test automation and performance testing for high volume back-end services supporting iOS and Android games.


How To Interview Like A Tester
Martin Hynie & Elizabeth Zagroba

So many interviews are about selling yourself. It feels disingenuous, misleading. The candidate is saying what the interviewer wants to hear. The candidate doesn’t get a good idea of what they’re expected to know on day one. They want to know what it’s like to work with those people on that team every day. The interviewer’s touting the company line and asking questions about a resume stacked with buzzwords. They want to parse out how much the candidate witnessed vs. contributed to a project. Interviewing is typically a one-way street. But it needs to go both ways.

Pair testing in interviews gets rid of this artifice. Martin and Elizabeth join the interviewee in the room. We catch the candidate up with our plan: other people are going to talk to them about their resume; we want to explore a website together to see a bit more about how they think and what they can find. Elizabeth helps navigate the bug-riddled website. “What are you noticing about that? Talk me through what you’re thinking.” Martin asks the meta-questions. “How much more time would you need to test this? What kind of tools or automation would help make you more productive?”

Interviewers need to treat interviews like exploratory test sessions. They can’t use the same script for every person, just like testers can’t use the same script for every test. Interviewers gather evidence and uncover new information about how the candidate thinks. Candidates stop trying to sell themselves and demonstrate their skills instead. Martin and Elizabeth are there for the triumphs and the struggles. They celebrate when a candidate discovers and reports their first bug. They scream internally when a “Did you see that?” isn’t a heuristic for a candidate to dig deeper. They observe the behaviors, and change their actions as a result. Hopefully the candidate does too. By the end of the session, everyone has a better understanding of whether or not this role is a good fit.

We’re going to foster an environment where everyone can be their genuine, authentic selves. We’re going to have attendees practice how to:

  • Describe a decision or a train of thought
  • Encourage a line of thought without giving too much away
  • Redirect someone onto a different point of focus
  • Reevaluate expectations
  • Deliver critical feedback concisely

Martin Hynie

With over fifteen years of specialization in software testing and development, Martin Hynie’s attention has gradually focused towards embracing uncertainty, and redefining testing as a critical research activity. The greatest gains in quality can be found when we emphasize communication, team development, business alignment and organizational learning.

A self-confessed conference junkie, Martin travels the world incorporating ideas introduced by various sources of inspiration (including Cynefin, complexity theory, context-driven testing, the Satir Model, Pragmatic Marketing, trading zones, agile principles, and progressive movement training) to help teams iteratively learn, to embrace failures as opportunities and to simply enjoy working together.


Elizabeth Zagroba

Elizabeth is a senior test engineer supporting a few teams at Medidata. She’s tested web apps, mobile apps, APIs, and content management systems. Elizabeth’s Ministry of Testing career began with a 99-second talk about moonwalking at TestBash NYC in 2015. She followed that up with an article about mind maps for the Dojo, and she came out as an introvert at TestBash Philadelphia in 2016.


Web Application Security: A Hands-on Testing Challenge
Dan Billing

We know that application security is important. We have to protect our customers' data and our employers' data while keeping our systems up and running. But do we have the skills and knowledge to meet that challenge?

During this workshop, we will begin to explore some of the concepts, skills, and techniques of security testing by working with a vulnerable web application. Through practical activities and hands-on learning, we will discover the key security issues that affect web applications today.

Testers will learn skills to identify software vulnerabilities and understand common threats and risks that occur in web applications. We will also examine some of the tools and utilities that can enhance and extend security testing efforts, look at the essential steps to build and execute your own security testing strategies, and examine how learning and mentoring can aid in the development of those strategies. You can and should build up your own skills with integrated security testing. This will ensure the ongoing relevance of your role in a security context, and the success of your organisation.

Building on personal experience of integrating security testing into an existing organisation that incorporates DevOps, continuous delivery and continuous integration, this workshop will highlight and discuss lessons learned from hackers and recent breaches, and their socio-economic, political and technical impact on software development organisations.

Attendees will take away a set of advice and techniques to incorporate and enable security testing into their day to day work, answering some of the questions that may arise around scope, skills, tools, models and learning.

Technical requirements: This is a practical workshop, so all attendees will require a laptop, and the ability to install and run the application under test, as well as some open source tools that will be useful during the session. Installation instructions and a tool list will be sent before the workshop, and pre-installation is highly recommended for a smooth workshop experience.

Prior experience in security testing web applications is not necessary; however, attendees will need to be comfortable testing web applications and using modern web browsers (e.g. Firefox, Chrome, Safari).
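
As a flavour of the kind of vulnerability such a workshop typically explores, here is a minimal, self-contained sketch of SQL injection via string concatenation. It is illustrative only, with an invented function and no real database or course material involved:

```javascript
// Classic injection risk: building a query by string concatenation.
function naiveQuery(username) {
  return "SELECT * FROM users WHERE name = '" + username + "'";
}

const probe = "' OR '1'='1";
console.log(naiveQuery(probe));
// -> SELECT * FROM users WHERE name = '' OR '1'='1'
// The probe escapes the quoting and turns the filter into a tautology,
// which is why security testers try such inputs, and why parameterised
// queries are the standard defence.
```

Trying hostile inputs like this against your own (safely sandboxed) application is exactly the hands-on habit the workshop aims to build.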

Takeaways:

  • Understanding of key security risks, threats and vulnerabilities
  • Learn and practice security testing skills in a safe space
  • Development of the security mindset
Dan Billing

Dan has been a tester for 16 years, working within a diverse range of development organisations, mostly in the south-west of England. He has been a freelance test consultant but currently works as a Test Jumper at Medidata, where most of his time is spent coaching and leading testers, developing test strategy and exploring the needs of the business. This includes mentoring, supporting and training members of the team to develop their security skills.

Dan’s love of testing drives him to be an active member of the testing community, helping to organise local tester meetups in the Bristol and Bath area. He is a co-facilitator with Weekend Testing Europe and also organises the South West Exploratory Workshop in Testing. He is also a co-host of the podcast Screen Testing, alongside Neil Studd.


Technical Test Automation Challenges: Patterns and Solutions
Dorothy Graham & Seretta Gamba

Laptops or tablets required for this tutorial

Many organizations find that system level test automation does not work as well as they thought it would - the magic didn’t happen for them. In many cases, these failures are due to generic technical reasons, which can be fixed relatively easily. These test automation patterns are common to automation efforts at any level with whatever tools you are using. We focus on often-neglected technical issues (i.e. not primarily management issues) and the patterns that help solve them. We look at issues such as BRITTLE SCRIPTS, INADEQUATE DOCUMENTATION, and UNFOCUSED AUTOMATION and discuss patterns such as TESTWARE ARCHITECTURE, DOCUMENT THE TESTWARE, AUTOMATE WHAT’S NEEDED, INDEPENDENT TEST CASES and TOOL INDEPENDENCE, as well as other issues and patterns that delegates want to investigate. Learn how to navigate efficiently through the patterns documented on the Test Automation Patterns Wiki, and develop a better understanding of technical test automation challenges and solutions.

Bring your laptop or internet-enabled tablet to gain access to the wiki during the tutorial.

The tutorial uses a mix of lecture, exercises and group discussion to explore the wiki and find solutions for common issues. A few selected patterns are covered in depth, and there is time for delegates to address the issues and problems they most want to learn more about.

Outline of tutorial:

  • Introduction & delegate issues
  • Test Automation Issues and Patterns
  • Using the wiki
  • Patterns covered:
    • TESTWARE ARCHITECTURE
    • DATA-DRIVEN vs KEYWORD-DRIVEN
    • DOCUMENT THE TESTWARE
    • AUTOMATE WHAT’S NEEDED
    • INDEPENDENT TEST CASES
    • TOOL INDEPENDENCE
    • COMPARISON DESIGN
    • EXPECTED FAIL STATUS
  • Exploration of issues and patterns most relevant to delegates

Dorothy Graham

Dorothy Graham has been in software testing for over 40 years, and is co-author of four books: Software Inspection, Software Test Automation, Foundations of Software Testing and Experiences of Test Automation. She is currently working on a wiki on Test Automation Patterns with Seretta Gamba.

Dot is a popular speaker at international conferences world-wide. She has been on the boards of many conferences and publications in software testing, and was programme chair for EuroSTAR in 1993 (the first) and 2009. She was a founder member of the ISEB Software Testing Board and was a member of the working party that developed the ISTQB Foundation Syllabus. She founded Grove Consultants and provided training and consultancy in software testing for many years, returning to being an independent consultant in 2008.

She was awarded the European Excellence Award in Software Testing in 1999 and the first ISTQB Excellence Award in 2012.


Seretta Gamba

Seretta Gamba has forty years of experience in software development. As test manager at ISS Software GmbH, she was charged in 2001 with implementing test automation. After studying the then-current strategies, she developed a kind of keyword-driven testing and a framework to support it. In 2009, the framework was extended to support manual testing. Speaking about this at EuroSTAR, Seretta got the attention of Dorothy Graham, who subsequently invited her to contribute a chapter to the book Experiences of Test Automation. After reading the entire book, Seretta noticed recurring patterns in solving automation problems and began to write a book on test automation patterns. She was soon joined by Dorothy, and together they developed the Test Automation Patterns wiki.

Together with Dorothy or alone, Seretta has held tutorials and talks about test automation, and especially Test Automation Patterns, at major conferences (STAR East & West, EuroSTAR, etc.).


Afternoon Sessions

Communication and Reporting 101
Dan Ashby & Mark Winteringham

As testers it’s our job to not only learn about our products and projects but share that information with others to allow them to make informed decisions. However, communication and reporting techniques are skills that testers often forget to practise and improve. Sometimes a tester needs to ask themselves what’s the best way to communicate with a team, what style of note taking works best for them and how do they report information clearly in a timely manner.

Dan and Mark’s interactive session offers exercises, examples and discussion points on how to:

  • Describe different forms of communication and why communication is important
  • Discuss the challenges surrounding communication and how to overcome them
  • Apply communication techniques to support your testing
  • Contrast different note taking styles and determine the right one for you
  • Explain your testing activities and what you have learnt during testing

By the end of the session you will be able to communicate successfully and record/report your testing in a way that is clear, concise and effective for others to act upon.

Dan Ashby
Dan is a SW Tester and he likes Porridge! (and whisky!)
Mark Winteringham

I am a tester, coach, mentor, teacher and international speaker, presenting workshops and talks on technical testing techniques. I’ve worked on award-winning projects across a wide variety of technology sectors, from broadcast and digital to finance and the public sector, working with various web, mobile and desktop technologies.

I’m an expert in technical testing and test automation, and a passionate advocate of risk-based automation and automation in testing practices, which I regularly blog about at mwtestconsultancy.co.uk. I’m also the co-founder of the Software Testing Clinic in London, a regular workshop where new and junior testers receive free mentoring and lessons in software testing. I have a keen interest in various technologies, and regularly develop new apps and Internet of Things devices. You can get in touch with me on Twitter: @2bittester


    Building Successful Communities of Practice
    Emily Webber

    Agile working and cross-functional teams can silo organisations into teams, programmes and functions. This leads to duplication of work, a reduction in knowledge sharing and, worse, cuts people off from their support network. At a time when organisations are scaling, structures are flattening and workforces are increasingly fluid, supporting and connecting people is more important than ever. This is where communities of practice come in.

    In this workshop, Emily will take you through the value of communities of practice, what needs to be in place and the steps towards creating successful communities. This workshop is for people wanting to start communities of practice as well as those that already have communities and want to increase their value for members and their organisations.

    Emily Webber

    Emily Webber has been working with Agile teams and organisations for a number of years. She has a breadth of experience of delivery and agile transformation in both the private and public sectors.

    She was the Head of Agile Delivery at Government Digital Service (GDS), where amongst other things she built, developed and led an amazing team of ~40 Agile delivery professionals. While doing this, she created the model used by many organisations for developing communities of practice. She has taken this model and applied it to organisations including the Department for Work and Pensions (DWP), Ministry of Justice (MoJ), Department for Environment, Food and Rural Affairs (Defra) and Co-op Digital, as well as capturing it in her book 'Building Successful Communities of Practice'.

    She is always seeking opportunities to give back to the Agile community. She co-founded Agile on the Bench, a meet-up in London, and a one-day Agile conference, and is often found speaking at Agile and Lean conferences and meet-ups.

    She is passionate about teams, communities, organisational learning and skills development. She blogs at emilywebber.co.uk and has a weak spot for vintage scooters.


    The Ultimate Testing Story
    Beren Van Daele & Ard Kramer

    Good stories last: they release emotions, they take on a life of their own, they make you proud, and they can help you inform and move others. This is why we embarked on our quest: the quest for the Ultimate Test Story! We set forth to scour the earth in search of testers: testers with interesting experiences and crazy stories to tell, but without the words to bring them to life. Our weapon of choice: TestSphere, a deck of one hundred cards that inspires and supports these testers to craft, temper and shape their raw experiences into strong, red-hot stories of power.

    In our workshop we will teach you how to use these cards by giving you different assignments in small groups. Most importantly, this means you are going to tell test stories. We are convinced that every tester has had interesting experiences that deserve to be told and shared. After telling each other these stories, you can discuss the content of the story and the performance of the storyteller:

    Do you have similar experiences?

    Would you do something differently?

    Do you have a challenging question?

    How can the story be told in a more captivating way?

    After you have finished the discussion, you can reward each other with a small gift in the form of a sticker, to show your appreciation for the story and insights shared.

    Content is vital, but it is also important to look at your performance: how to improve your storytelling. A good story is priceless: it can help you convince your manager or drive home your point in a coaching session. In our workshop it is crucial that you give each other feedback on how to tell a good story.

    By the end of the workshop, you will have done many interesting activities: you’ll have heard and told lots of stories and received feedback on how to narrate a great story. Maybe you were even rewarded for your positive participation. Challenge yourself and your test colleagues by telling interesting test stories, and who knows, maybe you can help us finish our quest by performing for us: The Ultimate Test Story.

    Stay awhile and listen. For the best place by the fire is kept for... the storyteller.

    Attendees will have:

    • Explained multiple stories about testing and defended decisions under scrutiny of their peers;
    • Taken stories from other peers under rigorous questioning and used that to teach as well as learn;
    • Heard multiple heuristics, techniques, patterns, quality aspects and feelings explained;
    • Acquired feedback on how to tell stories;
    • Used the Test Sphere card deck.

    Beren Van Daele

    I’m a software tester from Belgium who shapes teams and testers to improve their work and their understanding of testing. I organise BREWT, the Belgian peer conference, and a testing meetup in my hometown of Ghent, and I speak at multiple European conferences. Together with the Ministry of Testing, I created TestSphere, a card game that gets people thinking and talking about testing.

    My dream is to become a test coach for people that nobody believes in anymore, or who no longer believe in themselves. People that have motivation and will, but no luck. I want to tap into that potential and give them the opportunity to become kick-ass testers.


    Ard Kramer

    I am a software tester from the Netherlands and have been working for Alten Nederland since 2008. I call myself a Qualisopher, which stands for someone “who loves truth and wisdom and at the same time is decisive to improve man and his environment”. This means I am interested in the world around us, to see what I can learn and apply in software testing. That is one of the reasons why I tell stories in books and at (test) conferences such as EuroSTAR, Expo:QA, Belgium Testing Days, CAST and Testnet conferences. My dream is to participate, as a good qualisopher, in all kinds of projects, whether sports, culture or software testing: projects which add value to our community. I want to inspire other people through cooperation, fun and empathy, and hopefully bring light into someone's life.


    Life's a Riot with Stubs, Fakes, Spies and Mocks
    Christopher Chant & Ash Winter

    Ever wished you had more control over third-party API responses? Have you been unable to test specific API responses? Perhaps you’re trying to improve the stability of your automation suites? Have you just started writing unit and integration tests? Maybe you’re building a client for an API that hasn’t been built yet, and you want to start testing earlier? When facing these challenges, mocks, stubs, fakes and spies are essential to testability, and can be used both in your automation and as tools to aid your exploratory testing.

    In this workshop, the group will explore mocks, stubs, fakes and spies. You’ll come away with ideas on when these techniques are appropriate and how to gradually build up features in the tools you create to mimic services, and you’ll see just how quick it is to go from idea to working tool. Some programming experience is preferred, but anyone with an interest in testability will find the workshop rewarding.

    Key takeaways are:

    • Recognise the common terminology used in the stubs, fakes, spies and mocks domain
    • Understand the difference between stubs, fakes, spies and mocks through their characteristics and use cases
    • Apply this foundational knowledge to build a gradually more featured tool to illustrate the journey from stub to fully fledged mock

    The systems we test are massively integrated with many different data sources, and this is only going to increase. With the ability to mimic key services, your dependencies won’t be the bottleneck that stops you from delivering information of value, early and often.
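    As a rough sketch of the distinction the workshop explores (the `get_rate`/`convert` names here are invented for illustration; only `unittest.mock` is a real library API):

```python
from unittest.mock import Mock

class StubRates:
    """Stub: returns a canned answer; no logic, no verification."""
    def get_rate(self, currency):
        return 1.25  # fixed response, regardless of input

class SpyRates(StubRates):
    """Spy: a stub that also records how it was called."""
    def __init__(self):
        self.calls = []
    def get_rate(self, currency):
        self.calls.append(currency)
        return super().get_rate(currency)

def convert(amount, currency, rates):
    """Code under test: converts an amount using the rates dependency."""
    return amount * rates.get_rate(currency)

# Using the stub: we only care about the return value.
assert convert(10, "GBP", StubRates()) == 12.5

# Using the spy: we can also verify the interaction afterwards.
spy = SpyRates()
convert(10, "GBP", spy)
assert spy.calls == ["GBP"]

# A mock holds both the canned answer and the expectations itself;
# Python's standard library provides one ready-made.
mock_rates = Mock()
mock_rates.get_rate.return_value = 1.25
assert convert(10, "GBP", mock_rates) == 12.5
mock_rates.get_rate.assert_called_once_with("GBP")
```

    The progression above, from canned responses to recorded calls to self-verifying expectations, mirrors the journey from stub to fully fledged mock that the workshop describes.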

    Christopher Chant

    Christopher Chant is a determined and passionate test professional with experience across multiple domains. He has learned to embrace all parts of the development lifecycle as learning opportunities: working in business analysis, development, testing and coaching roles in an attempt to help teams grow and deliver.

    When not testing, Christopher spends his time running (not often enough), traveling all over the country to watch Nottingham Forest F.C. lose (occasionally they win), jealously looking at other people's dogs and playing board games.


    Ash Winter
    Ash Winter is a learning tester, conference speaker and unashamed internal critic, with an eye for an untested assumption or claim. He is a veteran of various roles encompassing testing, performance engineering and automation, both as a team member delivering mobile apps and web services and as a leader of teams and change. He helps teams think about testing problems, asking questions and coaching when invited.

    Mobile Security Testing
    Jahmel Harris

    Security testing can seem like a daunting task that is best left to external contractors. In the very best case scenario, we hope that the expensive penetration test does not turn up any security issues as this is likely to delay the release.

    This workshop is aimed at mobile application software testers, to help them understand exactly what a penetration tester will do and why, for most vulnerabilities, a security expert is not needed. In this half-day workshop, we will look at how security testing can be carried out on Android and iOS applications, often MORE successfully by testers who know the product inside and out.

    Takeaways: how to perform basic security testing of mobile (Android and iOS) applications, using tools and techniques used by external penetration testers, and how to integrate security testing into the in-house testing process. Although this course is not a replacement for a full pen test, by performing security tests while the application is being developed, it is hoped that external consultants (when needed) will require less time on a test and find fewer high-risk vulnerabilities.

    Jahmel Harris

    Jahmel is a security researcher and hacker. He co-founded Digital Interruption this year, a security testing consultancy which also works with organisations to develop tools, techniques and methodologies for integrating security into agile development teams. With a background in software development as well as security testing, Jahmel is able to advise engineers on balancing security with functionality.

    Jahmel has a particular interest in mobile application security, reverse engineering and radio and has presented talks and workshops at home in the UK and abroad.


    [TALK] - I have a secret
    Anusha Nirmalananthan

    This is not a workshop. This is a talk that all attendees of the workshop day will attend in the main auditorium.

    Have you ever needed to share something difficult with a colleague? And agonised about how they would react? Or been shocked by how they did react?

    This talk is the amusing and awkward tale of what it's like to share something deeply important for the first time. How can you prepare beforehand? How can you respond if you are the colleague? What do you do if one of you puts their foot in their mouth?

    Anusha Nirmalananthan
    Conference
    Friday, 16th March 2018

    The UnExpo
    Richard Bradshaw

    So, you've heard of expos and unconferences; well, this is an UnExpo! It's an attendee-driven expo.

    Last year we received some feedback from attendees that they would like more to do during the breaks of TestBash. Expos are what some conferences use to fill any break-time gaps, but expos have never fit in with our ‘user-experience’ goals for TestBash. However, we are driven by what our community wants, love experimenting with new ideas, and so The UnExpo idea was born!

    The UnExpo will run during the morning and afternoon breaks and during the lunch break of TestBash. During this time, attendees will be given the chance to set up a "stand" and share their innovations, testing practices, new tools, testing problems, or thoughts on any topic that will initiate interesting discussions. Other attendees, armed with post-its, pens and awesome ideas, will then be able to visit stands of interest to discuss all things testing!

    Benefits of The UnExpo

    • An opportunity for attendees interested in the same subject to engage in conversation. Who knows what that may lead to in future… potential collaborations, job offers, or maybe even a new tester friend!

    • Gain impartial and insightful feedback from peers when running a stand. Critique, both positive and negative, can help you improve ideas and solve problems.

    • A quick and accessible way to see and hear all about what the testing community is up to. What tools are others using? What problems are they encountering? What solutions are they creating? All packaged up in one room, ready and waiting for you.

    Richard Bradshaw
    Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years of testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, and is currently the FriendlyBoss at The Ministry of Testing. Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He is also the creator of the YouTube channel Whiteboard Testing.

    Why we Should Test Programmable Infrastructure
    Matt Long

    DevOps is taking over the tech world. Now, automated programmable infrastructure is reaching widespread adoption. We’ve also embraced test automation. But there’s a crucial part we’re missing: testing the programmable infrastructure code itself. After all, it’s code too, right?

    Even worse, microservices and other exotic architectures are making deployments ever more complex. The more complex things get, the more bugs we will find. We can’t afford to ignore it for long.

    I worked on a project where we absolutely had to test infrastructure - because our application deployed, configured, and maintained cloud resources. This talk will take you through some of the strategies we developed, some of the problems we encountered, and some thoughts about where the industry might be headed from here.

    Takeaways:

    • Why infrastructure also needs testing
    • Why testers need to care about infrastructure and why it can't just be left to ops
    • What kind of tools are available to test it
    • What approaches you can take to test it
    • What downsides there are to it, and what problems we face now
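    As a hedged illustration of the idea (not taken from the talk): infrastructure-as-code tools typically emit a machine-readable plan before making changes, and that plan can be asserted against like any other test input. The resource shapes below are hypothetical.

```python
# A Terraform-style plan, already parsed to a dict (hypothetical shape).
plan = {
    "resources": [
        {"type": "aws_s3_bucket", "name": "logs",
         "config": {"acl": "private", "versioning": True}},
        {"type": "aws_security_group", "name": "web",
         "config": {"ingress_ports": [80, 443]}},
    ]
}

def resources_of(plan, rtype):
    """All planned resources of the given type."""
    return [r for r in plan["resources"] if r["type"] == rtype]

# Policy-style checks that could run on every change, before provisioning:
for bucket in resources_of(plan, "aws_s3_bucket"):
    assert bucket["config"]["acl"] == "private", f"{bucket['name']} is not private"

for sg in resources_of(plan, "aws_security_group"):
    assert 22 not in sg["config"]["ingress_ports"], f"{sg['name']} exposes SSH"
```

    Checks like these can live in CI and fail a change that would weaken the infrastructure, catching policy violations before any cloud resource is created.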

    Matt Long

    Matt is a senior QA consultant for OpenCredo, a London-based consultancy. OpenCredo specialise in applying emerging open-source technologies to business problems, focusing on a hands-on approach. He is responsible for the testing requirements in a number of OpenCredo engagements, typically creating and deploying automated testing frameworks and improving QA practices. His particular areas of expertise include cloud infrastructure, serverless, API and web testing.

    Matt works with tools such as Java, Selenium, Cucumber, Mocha and Gatling, across a broad spectrum of languages. He builds and maintains a machine learning foosball bot in his spare time.


    To Boldly Go: Taking the Enterprise on a Journey to Structured Exploratory Testing
    Aaron Hodder

    In this session, Aaron will talk about a recent experience where a test team of business users needed to be coordinated to test a large, complex product in a way that was reportable, legible, and traceable. We didn't want to constrain the business users within the bounds of prescriptive test cases, but we needed to estimate, track, and report on the testing that was done daily.

    Using a combination of kanban, visual test coverage modelling, and managing testing based on sessions, we rose to the challenge and performed testing in a way that was visible and reportable while giving the testers enough freedom to explore and investigate.

    Takeaways:

    • Exposure to a framework for managing exploratory testing using a combination of kanban, SBTM, and visual modelling.
    • An understanding of how exploratory testing can be auditable and traceable.

    Aaron Hodder

    Aaron Hodder hails from Wellington, New Zealand, where he works for Assurity Consulting to coach testers to develop and deliver new and innovative testing practices to better suit the demands of modern-day software development.

    Aaron is a passionate software tester with a particular enthusiasm for visual test modelling and structured exploratory testing techniques. He regularly blogs and tweets about testing and is a co-founder of Wellington Testing Workshops.


    Communities of Practice, the Missing Piece of Your Agile Organisation
    Emily Webber

    Agile working and cross-functional teams can silo organisations into teams, programmes and functions. This leads to duplication of work, a reduction in knowledge sharing and, worse, cuts people off from their support network. At a time when organisations are scaling, structures are flattening and workforces are increasingly fluid, supporting and connecting people is more important than ever. This is where communities of practice come in.

    Communities of practice have many valuable benefits for both individuals and organisations. In this session, Emily will draw from her experiences of developing communities of practice at the Government Digital Service, government departments and other organisations, as well as case studies from her ongoing research into this area, to show you why communities of practice are a vital piece of your agile organisation and what role they can play.

    Emily Webber

    Emily Webber has been working with Agile teams and organisations for a number of years. She has a breadth of experience of delivery and agile transformation in both the private and public sectors.

    She was the Head of Agile Delivery at Government Digital Service (GDS), where amongst other things she built, developed and led an amazing team of ~40 Agile delivery professionals. While doing this, she created the model used by many organisations for developing communities of practice. She has taken this model and applied it to organisations including the Department for Work and Pensions (DWP), Ministry of Justice (MoJ), Department for Environment, Food and Rural Affairs (Defra) and Co-op Digital, as well as capturing it in her book 'Building Successful Communities of Practice'.

    She is always seeking opportunities to give back to the Agile community. She co-founded Agile on the Bench, a meet-up in London, and a one-day Agile conference, and is often found speaking at Agile and Lean conferences and meet-ups.

    She is passionate about teams, communities, organisational learning and skills development. She blogs at emilywebber.co.uk and has a weak spot for vintage scooters.


    Experiences in Modern Testing
    Alan Page

    While software testing has always been changing and evolving, the changes many of us are seeing recently go well beyond the scope of changes we’ve seen in the last two (or more) decades. Independent test teams are fading in favour of testers embedded in the development team. Large portions of automation are now owned by developers. Data analysis and monitoring are taking on a prevalent role. Technical skills well beyond writing code are becoming critical knowledge. The scope and breadth of the test role require ever more expertise and depth of knowledge.

    Alan Page has led (and is leading) teams through transformations to modern, advanced testing. In a keynote filled with experiences, anecdotes and practical examples, as well as warnings about potential traps, he shares everything he knows (or at least everything he can fit into this session) about modern testing in 2018 and what it means to every software tester.

    Takeaways:

    • Knowledge of technical skills beyond automation that are valuable to testers
    • Ideas and examples of "new" tasks and approaches that testers can use to provide value to their team
    • Examples of how to lead changes in testing on your own teams

    Alan Page

    Alan Page has been a software tester for over 25 years, and is currently the Director of Quality for Services at Unity Technologies. Previous to Unity, Alan spent 22 years at Microsoft working on projects spanning the company - including a two year position as Microsoft's Director of Test Excellence.

    Alan was the lead author of the book "How We Test Software at Microsoft" and contributed chapters to "Beautiful Testing" and "Experiences of Test Automation: Case Studies of Software Test Automation". His latest ebook (which may or may not be updated soon) is a collection of essays on test automation called "The A Word: Under the Covers of Test Automation", available on Leanpub.

    Alan also writes on his blog (http://angryweasel.com/blog), podcasts (http://angryweasel.com/ABTesting), and shares shorter thoughts on Twitter (@alanpage).


    Less is More
    Elizabeth Zagroba & Diana Wendruff

    A colleague attempts to compliment you, but insults you in the process. You ask someone a yes-or-no question, and they explain the system to you for 45 minutes. You open a ticket to find a pasted email chain, unedited. At times we can all make conversations too lengthy, or start them without adequate forethought. The result can be that we come across as fake or insincere. Why does any of this need to be communicated?

    In this talk, we want to impart that experience to others. We’ll present techniques for how people can quickly come to a common understanding. We’ll share examples of how a short interaction doesn’t have to be superficial. You’ll find out what you can say to a confidant that you might not want to say to an acquaintance. We’ll engage you in learning about self-awareness, social norms, and judgement. You’ll learn how to be succinct, so less can be more.

    You’ll understand:

    • Which conversations to have
    • When to be a listener
    • When to think things through

    Elizabeth Zagroba

    Elizabeth is a senior test engineer supporting a few teams at Medidata. She’s tested web apps, mobile apps, APIs, and content management systems. Elizabeth’s Ministry of Testing career began with a 99-second talk about moonwalking at TestBash NYC in 2015. She followed that up with an article about mind maps for the Dojo, and she came out as an introvert at TestBash Philadelphia in 2016.


    Diana Wendruff

    Diana is the lead tester at Loop Returns. She’s tested the complex process of authoring admin portals, 3rd party integrations, and how websites work for the end user. In her free time she likes to bike, hike, and explore in nature.


    Universities: What Do the Academics Have to Say About Testing, and Why Should We Care?
    Geoff Loken

    Testing happens in computer labs, at desks, on mobile phones, and anywhere else with computers. Testing, like the broader field of software development, is a discipline grown by practitioners. But practice isn't the only place ideas develop. The lessons we learn while testing filter their way into training courses and academic instruction, and likewise, researchers use those courses to pass ideas down to us in the field.

    This talk will discuss the current state of academia and how it handles testing: what's missing, what's good, and what's bad. It will ask whether we should care, and talk about how we can make things better.

    Takeaways: Participants will leave with a better understanding of what universities have to say about testing, and for many, a better understanding of how academic research works. They'll be thinking about how their ideas spread, and how we can contribute to a smarter world for testing.

    Geoff Loken

    Geoff Loken has been in QA (or testing, if you prefer) for approximately ten years, and is currently serving as Quality Assurance Coordinator at Athabasca University. He holds a Master of Arts degree in History from the University of Regina, and is in the final stages of a Master of Science in Computing and Information Systems from Athabasca University. He lives happily in northern Alberta, but escapes once or twice a year to travel or talk at conferences. Find him on Twitter @geoffloken.


    Learning to Learn - My Struggles and Successes!
    Danny Dainton

    Learning something new is hard! Learning which new testing related subject to focus on is super difficult!

    I've never been a natural learner; on paper I have a very poor educational background, but my current thirst for knowledge proves that anyone can "learn to learn". I'm a relatively new tester, and beginning any new career means the learning curve is going to be extremely steep!

    In this talk, I would like to take you on a journey through my own experiences and walk you through some of the failures and successes I've had on my path to becoming the tester I am today, as well as my own vision for where I'd like to see myself.

    Takeaways: I want people to know that failure is not a bad thing; it's how we learn and grow, as people and as testers. Learning is a very personal thing, and there are a number of different methods that I use or have used in the past in order to find the right fit for me.

    I want people to be more vocal about the things that they learn and not to be afraid to share these experiences with others - there will always be someone that finds value in what you share.

    Above all, I want people to have fun and not get bogged down by the pressure of trying to learn everything there is to learn about anything in one day. We need to take a step back sometimes and celebrate the things we have learnt so far.

    Danny Dainton
    Danny is a former British Army infantry soldier who has served in places such as Iraq and Afghanistan, but now finds himself completely in love with the software testing craft. He's relatively new to testing but enjoys the challenge of learning new things every day! He's currently a Senior Tester at the awesome NewVoiceMedia, based in Basingstoke, but is very much a remote worker living in Bristol. You can reach him on Twitter @dannydainton and he also blogs at dannydainton.com

    Discovering Logic in Testing
    Rosie Hamilton

    We all test in different ways, and sometimes it can be hard to explain the thought processes behind how we test. What leads us to try certain things, and how do we draw conclusions? Surely there is more going on here than intuition and luck. After working in games testing for almost a decade, I will draw from my personal experience to explain how games testers develop advanced logical reasoning skills. Using practical examples that will make you think, I will demonstrate logical patterns, rules and concepts that can help all of us gain a deeper understanding of what is actually happening in our minds when we test.

    Takeaways: See how testing looks and feels from the perspective of a games tester, and hear about some of the challenges games testers face.

    Learn about the differences between Deductive, Inductive and Abductive reasoning along with the theory of Falsificationism.

    Identify some of the biases we encounter when using personal observations and how logical reasoning can be applied when testing.

    Rosie Hamilton
    Hello, I'm Rosie. I've been testing since 2005 and survived 9 years in the UK games industry, working at places like Xbox Cert, 2K Games and Blizzard Entertainment. In 2014 I left games testing behind and stepped into the world of testing business software. Around this time I started writing a testing blog called Mega Ultra Super Happy Software Testing Fun Time to keep track of all the new things I was learning. I currently work as a Senior Test Engineer at Scott Logic in Newcastle, where I test financial software used by banks. I've spoken at and been involved with Newcastle Upon Tyne Agile Testing, Agile North East, South West Test and Leeds Testing Atelier. I am also a host for #TuesdayNightTesting gatherings. My hobbies include violin and yoga.

    Part of the Pipeline
    Ash Winter

    Nobody really likes change; it's human nature. Testers have a special relationship with changing tools and techniques: when they change, we tend to flounder a little and end up very nervous about our place in the new world. Continuous delivery is one such circumstance, and I see and speak to many testers really struggling with it. However, with a significant shift in outlook and a chunk of personal development, testers can excel in environments like these. It’s time to get out in front of a changing world, rather than always battling to catch up.

    I want to share my experience of adding value as a tester in a continuous delivery environment: the new technologies and techniques I've learned, using your production environment as an oracle, advocating for testability and, most crucially, not overestimating what our testing can achieve. Testing is not the only form of feedback, and it’s time to let go of some of the aspects of testing we cling to.

    Continuous delivery adds richness and variety to our role as testers. To me, it is a facilitator for the autonomy and respect that testers have craved for a long time, so let’s get involved...

    Takeaways:

    • My learning and experience of working in a low batch size, high flow environment and the effects that it had on me as a tester.
    • Aspects of testing which hinder one's effectiveness in this environment, and how to let go of them.
    • A few practical ways to add value, based on skills that I gathered along the way...

    Ash Winter
    Ash Winter is a learning tester, conference speaker and unashamed internal critic, with an eye for an untested assumption or claim. He is a veteran of various roles encompassing testing, performance engineering and automation, both as a team member delivering mobile apps and web services and as a leader of teams and change. He helps teams think about testing problems, asking questions and coaching when invited.