TestBash Brighton 2019

April 3rd 2019 - April 5th 2019

The most community-focused and learning-focused software testing conference is back in 2019! This will be the eighth TestBash Brighton, and we're super excited about it!

We’re excited about the strong connection and growth of the MoT Community and expanding our offerings. We have TestBash Essentials, TestBash Workshops and TestBash conference tickets to choose from. Plus, we are now offering childcare for the 3 days. Have a peek to see what interests you and your team!

This year it's a 3-day event with an additional pre-event course.

For the pre-event course, we will be hosting the 3-day ‘Automation in Testing’ course with Richard Bradshaw and Mark Winteringham.

Day 1 is a single-track conference, TestBash Essentials, focused on bringing testing knowledge and experience to those who would benefit from it but may not necessarily be experts in it (yet).

Day 2 is a workshop day, with 10 half-day workshops and 2 talk tracks.

Day 3 is a single-track conference, TestBash, consisting of 9 talks.

On Saturday, we will be hosting an Open Space event.

On all the days, you can expect a wonderful community to come together in a friendly, professional and safe environment. We think you will feel at home when you arrive!

Training
Monday, 1st April 2019:

What Do We Mean By ‘Automation in Testing’?

Automation in Testing is a new namespace designed by Richard Bradshaw and Mark Winteringham. The use of automation within testing is changing, and in our opinion, existing terminology such as Test Automation is tarnished and no longer fit for purpose. So instead of having lengthy discussions about what Test Automation is, we’ve created our own namespace which provides a holistic experienced view on how you can and should be utilising automation in your testing.

Why You Should Take This Course

Automation is everywhere; its popularity and uptake have rocketed in recent years, and it's showing little sign of slowing down. So in order to remain relevant, you need to know how to code, right? No. While knowing how to code is a great tool in your toolbelt, there is far more to automation than writing code.

Automation doesn’t tell you:

  • what tests you should create
  • what data your tests require
  • what layer in your application you should write them at
  • what language or framework to use
  • if your testability is good enough
  • if it’s helping you solve your testing problems

It’s down to you to answer those questions and make those decisions. Answering those questions is significantly harder than writing the code. Yet our industry is pushing people straight into code and bypassing the theory. We hope to address that with this course by focusing on the theory that will give you a foundation of knowledge to master automation.

This is an intensive three-day course where we are going to use our sample product and go on an automation journey. This product already has some automated tests, and it already has some tools designed to help test it. Throughout the three days we are going to explore those tests: why they exist, the decisions behind the tools we chose to implement them in, why that design and why those assertions. Then there are the tools: we'll show you how to expand your thinking and strategy beyond automated tests to identify tools that can support other testing activities. As a group, we will then add more automation to the project, exploring the why, where, when, who, what and how of each piece we add.

What You Will Learn On This Course

Online
To maximise our face to face time, we’ve created some online content to set the foundation for the class, allowing us to hit the ground running with some example scenarios.

After completing the online courses attendees will be able to:

  • Describe and explain some key concepts/terminology associated with programming
  • Interpret and explain real code examples
  • Design pseudocode for a potential automated test
  • Develop a basic understanding of programming languages relevant to the AiT course
  • Explain the basic functionality of a test framework

Day One
The first half of day one is all about the current state of automation, why AiT is important and discussing all the skills required to succeed with automation in the context of testing.

The second half of the day will be spent exploring our test product along with all its automation and openly discussing our choices, reverse-engineering the decisions we’ve made to understand why we implemented those tests and built those tools.

By the end of day one, attendees will be able to:

  • Survey and dissect the current state of automation usage in the industry
  • Compare their company's usage of automation to that of other attendees
  • Describe the principles of Automation in Testing
  • Describe the difference between checking and testing
  • Recognize and elaborate on all the skills required to succeed with automation
  • Model the ideal automation specialist
  • Dissect existing automated checks to determine their purpose and intentions
  • Show the value of automated checking

Day Two
The first half of day two will continue with our focus on automated checking. We are going to explore what it takes to design and implement reliable, focused automated checks. We’ll do this at many interfaces of the application.

The second half of the day focuses on the techniques and skills a toolsmith employs. Building tools to support all types of testing is at the heart of AiT. We’re going to explore how to spot opportunities for tools, and how the skills required to build tools are nearly identical to building automated checks.

By the end of day two, attendees will be able to:

  • Differentiate between human testing and an automated check, and teach it to others
  • Describe the anatomy of an automated check
  • Model an application to determine the best interface at which to create an automated check
  • Discover new libraries and frameworks to assist with automated checking
  • Implement automated checks at the API, JavaScript, UI and Visual interface
  • Discover opportunities to design automation to assist testing
  • Appreciate that techniques and tools like CI, virtualisation, stubbing, data management, state management, bash scripts and more are within reach of all testers
  • Propose potential tools for their current testing contexts

Day Three
We’ll start day three by concluding our exploration of toolsmithing: creating some new tools for the test app and discussing the potential for tools in the attendees' companies. The middle part of day three will be spent talking about how to talk about automation.

It’s commonly said that testers aren’t very good at talking about testing; well, the same is true about automation. We need to change this.

By the end of day three, attendees will be able to:

  • Justify the need for tooling beyond automated checks, and convince others
  • Design and implement some custom tools
  • Debate the use of automation in modern testing
  • Devise and coherently explain an AIT strategy

What You Will Need To Bring

Please bring a laptop (OS X, Linux or Windows) with all the prerequisites installed; the prerequisites will be sent to you in advance.

Is This Course For You?

Are you currently working in automation?
If yes, we believe this course will provide you with numerous new ways to think and talk about automation, allowing you to maximise your skills in the workplace.
If no, this course will show you that the majority of skill in automation is about risk identification, strategy and test design, and you can add a lot of value to automation efforts within testing.

I don’t have any programming skills, should I attend?
Yes. The online courses will be made available several months before the class, allowing you to establish a foundation ready for the face to face class. Then full support will be available from us and other attendees during the class.

I don’t work in the web space, should I attend?
The majority of the tooling we will use and demo is web-based; however, AiT is a mindset, so we believe you will benefit from attending the class and learning a theory you can apply to any product/language.

I’m a manager who is interested in strategy but not programming, should I attend?
Yes. One of our core drivers is to educate others in identifying and strategizing around problems before automating them. We will offer techniques and teach you skills to become better at analysing your context and using that information to build a plan towards successful automation.

What languages and tools will we be using?
The current setup uses Java and JS. Importantly though, we focus more on the thinking than the implementation, so while we’ll be reading and writing code, the languages are just a vehicle for the context of the class.

Richard Bradshaw
Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years of testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, and is currently the FriendlyBoss at The Ministry of Testing. Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He is also the creator of the YouTube channel, Whiteboard Testing.
Mark Winteringham

I am a tester, coach, mentor, teacher and international speaker, presenting workshops and talks on technical testing techniques. I’ve worked on award winning projects across a wide variety of technology sectors ranging from broadcast, digital, financial and public sector working with various Web, mobile and desktop technologies.

I’m an expert in technical testing and test automation and a passionate advocate of risk-based automation and automation in testing practices, which I regularly blog about at mwtestconsultancy.co.uk. I’m also the co-founder of the Software Testing Clinic in London, a regular workshop for new and junior testers to receive free mentoring and lessons in software testing. I also have a keen interest in various technologies, regularly developing new apps and Internet of Things devices. You can get in touch with me on Twitter: @2bittester


Workshops
Thursday, 4th April 2019:
Morning Sessions

Join us as we explore the heart of testing.  In this workshop (based on the BBST Foundations course), we will tour the foundational concepts and challenges of software testing.

Participants will:

  • Explore the diversity of testing and contexts in which testing is performed.
  • Learn to evaluate the context and determine information objectives in developing test strategies.
  • Practice applying heuristics for determining if the software is working.
  • Understand the impossibility of complete testing.

Takeaways

This workshop will equip you to better answer the following questions:

  • What is software testing?
  • Why am I testing?
  • Is there a problem here?
  • Are we done yet?
Ben Simo
Ben Simo, aka QualityFrog, is an amphibious time-traveling context-driven cyborg software investigator. In his nearly 30 years as a professional software tester, Ben has seen technologies and techniques come and go; while one thing remains the same: software is built by, used by, and impacts people. Ben approaches software testing as observational and experimental investigation that enables people to make better decisions that result in better software. Ben currently helps teams build better software at Medidata Solutions. Ben shares wild-caught software problems at IsThereAProblemHere.com.
Erik Davis
Erik has over 16 years of experience in and around software testing. He has been a tester, team lead, manager, and senior manager of testers. Erik built and ran an automation team, led a test education team, and has helped multiple companies find, recruit, and develop testers. Erik currently works as a Lead Test Engineer at OnShift, helping to build and grow their testing practice and team.

We will start with an overview of why you need to randomize your test data and testing practices. This will be accompanied by real-world examples of test seed data, generated test data and front-end tests. After proving how poor static tests are at actually finding problems, we will move on to showing some simple techniques for creating slightly more robust tests with ‘random’ data sets. These examples will be worked on by the group and are language- and skill-agnostic, as they are really logic tests. How these data sets are used and generated will be covered, as well as what they lack. The next step will be a more advanced look into actual user-generated data and front-end behavior. This leads to the next activity: taking your previous work and making it ‘smarter’ to simulate real users. The talk will conclude with a summary of how to take this information and use it in practical situations, as well as when less-complicated ‘dumb’ data is acceptable.
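To give a flavour of the difference between purely ‘random’ data and ‘user-like’ data, here is a small illustrative sketch in Python. The names, domains and edge cases are invented for the example and are not taken from the workshop materials:

```python
import random
import string


# Purely 'random' value: rarely resembles anything a real user would type.
def random_string(length=12):
    return "".join(random.choice(string.ascii_letters) for _ in range(length))


# 'User-like' generators: compose realistic fragments, then mix in the awkward
# cases real customers produce (unicode, apostrophes, whitespace, blanks).
FIRST_NAMES = ["Anna", "Jo", "O'Brien", "Łukasz", "María-José", " lee "]
DOMAINS = ["example.com", "mail.example.co.uk", "test.example.org"]


def user_like_email():
    name = random.choice(FIRST_NAMES).strip().lower().replace(" ", ".").replace("'", "")
    suffix = random.choice(["", str(random.randint(1, 99)), "+newsletter"])
    return f"{name}{suffix}@{random.choice(DOMAINS)}"


def user_like_name():
    # Occasionally return the edge cases that static seed data never covers.
    edge_cases = ["", "  ", "Null", "Robert'); DROP TABLE users;--", "名前"]
    if random.random() < 0.2:
        return random.choice(edge_cases)
    return random.choice(FIRST_NAMES)


if __name__ == "__main__":
    seed = random.randrange(1_000_000)
    random.seed(seed)
    print(f"seed={seed}")  # log the seed so a failing run can be reproduced
    for _ in range(5):
        print(repr(user_like_name()), user_like_email())
```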

Takeaways

  • How to think more like an end-user.
  • Creating random data is hard, but creating user-like data is easy.
  • Writing your own test data framework is not hard, and makes data generation easier.
  • Preventing errors from unusual customer data is more important than simply testing the same 'random' data over and over.
Mike Roznik
Mike started his career as a developer, spending over 15 years working in multiple languages and at various points as a front end dev, back end dev, security consultant, DBA, and even a run as a UI/UX designer. At some point he became the 'testing developer' and that started him down the path that eventually led to being a full-time tester.

We are often reminded by those experienced in writing test automation that code is code. The sentiment being conveyed is that test code should be written with the same care and rigor as production code. However, many people who write test code may not have experience writing production code, so it’s not exactly clear what is meant by this sentiment. And even those who write production code find that there are unique design patterns and code smells specific to test code of which they are not aware. In this workshop, you will be given a smelly test automation code base which is littered with several bad coding practices. Together, we will walk through each of the smells, discuss why it is considered a violation, and then refactor the code to implement a cleaner approach. In particular, we will investigate the following smells:

  • Long Class
  • Long Method
  • Shotgun Surgery
  • Duplication
  • Indecent Exposure
  • Inefficient Waits
  • Flaky Locator Strategies
  • Multiple Points of Failure
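The workshop's own code base isn't reproduced here, but as a rough illustration of two of the smells above (‘Inefficient Waits’ and ‘Flaky Locator Strategies’), here is a small hypothetical Selenium sketch in Python; the page structure, element IDs and timings are invented:

```python
# Before/after sketch of two common test-code smells and one way to refactor them.
import time

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


def login_smelly(driver, user, password):
    # Smells: flaky locators (brittle absolute XPaths) and an inefficient, fixed wait.
    driver.find_element(By.XPATH, "/html/body/div[2]/form/input[1]").send_keys(user)
    driver.find_element(By.XPATH, "/html/body/div[2]/form/input[2]").send_keys(password)
    driver.find_element(By.XPATH, "/html/body/div[2]/form/button").click()
    time.sleep(10)  # hopes the next page has loaded by now


def login_clean(driver, user, password):
    # Refactored: stable locators and an explicit wait on an observable condition.
    driver.find_element(By.ID, "username").send_keys(user)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
    WebDriverWait(driver, timeout=10).until(
        EC.visibility_of_element_located((By.ID, "welcome-banner"))
    )
```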

Takeaways

By the end of the workshop, you’ll be able to:

  • Identify code smells within test code
  • Understand the reasons why an approach is considered problematic
  • Implement clean coding practices within test automation
Angie Jones
Angie Jones is a Senior Developer Advocate who specializes in test automation strategies and techniques. She shares her wealth of knowledge by speaking and teaching at software conferences all over the world, as well as writing tutorials and blogs on angiejones.tech. As a Master Inventor, Angie is known for her innovative and out-of-the-box thinking style which has resulted in more than 25 patented inventions in the US and China. In her spare time, Angie volunteers with Black Girls Code to teach coding workshops to young girls in an effort to attract more women and minorities to tech.

Leading technical teams delivering software brings a unique set of challenges beyond those faced by other teams. How do you learn to communicate with people on "the business side" of your organization? How do you negotiate time and budget for critical things like testing and infrastructure? And how do you lead people who are experts in technical areas you have no experience in?

Join Jim Holmes in this highly interactive workshop where you'll learn some fundamental skills and gather some tools that can help you on your leadership journey. You'll clarify what makes an effective leader, learn a few critical communication skills, and get tips on dealing with difficult people and situations. You'll learn how to better understand and embrace business value over shiny new technical toys.

You'll leave this workshop with a better understanding of leveraging your strengths and mitigating your weaknesses. You'll also take away approaches for ensuring you're able to best empower your teams to do amazing things.

Takeaways

Practical, experience-based learnings on

  • Whether or not you want to become a leader
  • Identifying your strengths and weaknesses
  • Learning what you need to learn about your own leadership journey
  • Learning how to manage expectations up and down
  • Learning how to communicate with the business
Jim Holmes
Jim is the owner/principal of Guidepost Systems which lets him engage directly with struggling organizations. He has been in various corners of the IT world since joining the US Air Force in 1982. He’s spent time in LAN/WAN and server management roles in addition to many years helping teams and customers deliver great systems. Jim has worked with organizations ranging from start ups to Fortune 10 companies to improve their delivery processes and ship better value to their customers. When not at work you might find Jim in the kitchen with a glass of wine, playing Xbox, hiking with his family, or banished to the garage while trying to practice his guitar.

Context matters.

This is not a new concept in the world of testing, but as our craft gets more and more refined, the expectations from new testers increase dramatically. Doubly so for new test managers. Someone entering the field of software testing in 2018 faces the daunting challenge of absorbing the last few decades of continuous learning, reading, writing, sharing and experimenting that the community at large has contributed.

As coaches and leaders, we would love for our teams to be able to jump right in, but the reality is that we need to consider each individual team member's current starting point. The journey is not shared. To be honest, the paths are not even similar. So how do we establish an approach for coaching entire teams that takes so many individual starting points into consideration?

In this experiential workshop, we will explore methods for coaching and teaching testing concepts in a manner that is inclusive to individual needs, and challenging to a wide array of skills, interests and opinions. 

We will explore:

  • Experimentation over theory
  • Interpretive exploration
  • Safe, ritualized challenging of peers
  • Amplification of diversity
  • Iterative "Real Work" swarming

Together, let's learn how to be better coaches, how to be more supportive and how to encourage the brilliant testers you have. Let's bring "context driven" into our team coaching.

Takeaways

  • Context-sensitive coaching methods for testing concepts
  • Safe methods to interpret starting points for individual team members
  • Iterative approaches over "surefire" methods

Chris Blain
Chris Blain has twenty-two years of experience working in software development, on projects ranging from embedded systems to cloud applications. He offers students and companies a powerful mix of real-world experience and the latest research knowledge. He has worked as an engineering manager, test manager, developer, coach, and test architect in a wide variety of domains, including developer tools, security, test and measurement, regulated health care, real-time systems, and large-scale distributed systems. Chris is a former board member of the Pacific Northwest Software Quality Conference and a regular conference speaker and instructor. His main interests are testing, distributed systems, programming languages, and helping teams deliver higher-quality software.
Martin Hynie

With over fifteen years of specialization in software testing and development, Martin Hynie’s attention has gradually focused towards embracing uncertainty, and redefining testing as a critical research activity. The greatest gains in quality can be found when we emphasize communication, team development, business alignment and organizational learning.

A self-confessed conference junkie, Martin travels the world incorporating ideas introduced by various sources of inspiration (including Cynefin, complexity theory, context-driven testing, the Satir Model, Pragmatic Marketing, trading zones, agile principles, and progressive movement training) to help teams iteratively learn, to embrace failures as opportunities and to simply enjoy working together.


Our talk tracks are an alternative to our workshops. The morning talk tracks are focused on DevOps. The talks will be spaced out throughout the morning with breaks.

On the Path to CI/CD with Michele Campbell

Trying to get to continuous deployment is probably one of the largest challenges an organization can face. So how did our 200 person engineering organization manage to double our production releases with minimal engineering effort in just two months? By simply taking it one step at a time. We faced some monumental hurdles during this time. The biggest one we needed to overcome: how do we essentially double our workload without overwhelming teams and, in turn, causing the quality of the program to suffer? I will discuss the solutions we developed for that hurdle and the many others that came along. When you leave this session, you will have the tools needed to start your organization down the path of CI/CD.

Continuous Performance Testing Through the Users’ Eyes with João Rosa Proença

One day, part of my team went to visit one of our customers and returned a bit concerned. They went to learn more about how users interact with our system, but also noticed some performance improvements we could work on.

This specific customer used our product intensively, producing not only large amounts of data but also complex relationships between records in the data-model. The team quickly addressed these never-seen-before scenarios and greatly improved performance. But how were we going to make sure that performance would not degrade over time for our customers, as we developed more features?

At first this seemed like a common scenario, but we found out that performance testing tends to focus on simulating lots of users rather than data-intensive scenarios with few users. Also, a lot of the tools that are commonly used focus on requests to the server, but part of the performance issues we had were happening on the browser-side.

In this talk I will present how we built a performance testing solution integrated with our continuous integration system. I’ll be covering the tools we chose for the task, including one we built ourselves, leveraging standard APIs in modern browsers to obtain metrics that are truthful to what the user experiences. You’ll see what we learned along the way, what worked best for us and also what didn’t.
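The talk doesn't prescribe a particular toolset, but as a rough sketch of the general idea (driving a scenario in a real browser, reading the standard Performance APIs, and failing a CI job when a budget is exceeded), something like the following Python/Selenium snippet could apply. The URL, budget and use of Chrome are invented for illustration; the actual tooling described in the talk is not reproduced here:

```python
from selenium import webdriver

PAGE_LOAD_BUDGET_MS = 3000  # example performance budget


def measure_page_load(url):
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        # PerformanceNavigationTiming is a standard API in modern browsers.
        nav = driver.execute_script(
            "return JSON.parse(JSON.stringify("
            "performance.getEntriesByType('navigation')[0]));"
        )
        return nav["duration"]  # navigation start to load end, in milliseconds
    finally:
        driver.quit()


if __name__ == "__main__":
    duration = measure_page_load("https://example.com/data-heavy-page")
    print(f"page load took {duration:.0f} ms")
    assert duration < PAGE_LOAD_BUDGET_MS, "performance budget exceeded"
```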

How to Plan and Define Your Continuous Deployment Pipeline with Patxi Gortázar

Planning and defining a complete continuous integration/deployment (CI/CD) process for an application is not easy if one has no previous experience. In the session, I will explain the basics: from code to deployment, stage by stage, and how we can assess each of the stages and automate them in our CI/CD pipeline.  

The session will start by presenting the basics of the life-cycle of a project, with a focus on development and testing with no CI/CD process involved. Then, we will discuss how and when we could be interested in running automations like compilation, testing, packaging, and distribution of assets from our project. We will see real examples in Jenkins, the popular CI/CD server, with reference to other CI systems, on premises or as a service (like TravisCI).

We will then discuss testing environments: when to deploy to a testing environment and what that requires (packaging the application, publishing in package repositories, deploying). In connection with this, we will discuss when to do testing: online testing versus nightly testing, the pros and cons of both, and when it could be interesting to choose one over the other.

Finally, options for deployment to production will be presented: either with some additional QA process involved in the middle, or directly publishing our product once all automated tests have passed. 

At the end, a complete pipeline, from commit to the code repository through to deployment to production, will be presented.

Michele Campbell
Michele is a QA Manager and Release Coordinator at Lucid Software in Utah, where she has been working on improving the testing process for four years. She speaks Japanese and loves visiting the country for vacations. In her free time, she enjoys playing with her pet guinea pig, baking just about anything, and going to live theater.
Patxi Gortázar
Dr. Francisco Gortázar (Patxi) is a Tenure Professor at Rey Juan Carlos University in Madrid with more than 12 years of experience in teaching distributed systems, software architectures, and continuous integration. He has published more than 20 papers in high-impact journals and conferences. He has developed a strong connection with the industry, working as a consultant for several Spanish companies on topics around cloud technologies and continuous integration and deployment. Since 2014 he has been in charge of the CI infrastructure for Kurento and OpenVidu, two open source projects for effortlessly building real-time communication applications, where more than 100 end-to-end tests are executed daily at different stages of the build & release pipeline. He is currently coordinating the H2020 project ElasTest, where he researches novel ways of testing cloud infrastructures and applications, including 5G, IoT and real-time video systems.
João Rosa Proença
João Proença comes from Lisbon, Portugal, and is Quality Owner in R&D for OutSystems, a company that provides one of the leading low-code development platforms in the world. He has assumed various roles throughout his career in the past 11 years, including quality assurance, development, customer support and marketing. Finding innovative solutions for difficult problems is what drives him the most, so he is always eager to talk about how professionals are overcoming testing challenges around the world. Outside of IT, João is passionate about songwriting, movies and football. You’ll see him tweet about all of these topics using the @jrosaproenca handle.
Afternoon Sessions

Introduction to the problem or opportunity

Throughout our careers we have seen a lot of test automation and performance testing created but never executed as part of a pipeline. Often these assets would be run locally and after a while, those tests become stale as they are not visible to the whole team, sometimes even regressing back to manual regression testing cycles over time. Often there is an existing pipeline that may run build and unit tests, but not include the other forms of testing that are needed to meet the level of quality that the organisation values for that application.

We would like to explore the value of building and tailoring pipelines to include many forms of testing. The skills of advocacy, test strategy, analysis and critical thinking are equally as important in this area as technically focused skills such as source control.

Why it is important for testers

We believe that testers will benefit from this set of skills in several ways:

  • Sharing tooling between developers, testers and operations-focused team members can foster collaboration and empathy, leading to teams forming and aiming towards common goals.
  • Testers can leverage their analytical skills to tailor a pipeline to their organisation's needs. Pipelines can be targeted at what the organisation values: running resilience tests early if uptime is valued, or performance tests if speed is the focus.
  • As DevOps principles and practices spread to many organisations, testers will need to find alternative ways to add value to their teams. Contributions to quality such as being able to deploy your application in a safe manner are increasingly important as customer demand and application complexity increase.
  • Different test environments suit different forms of testing: end-to-end testing in integrated environments or early exploratory testing in development environments can both be added to a pipeline.
  • Testers can represent their existing test strategy within a new or existing pipeline, helping their team to achieve a balanced test approach.

Having many layers of test automation only realises its true value when they are executed as part of a deployment pipeline. If testers are to make a real impact as they switch to automation-focused roles, the main differentiator will be the ability to run those tests as part of deploying applications in a safe and stable manner, with the goal of making your release process a business advantage instead of a limitation.

Ash Winter
Ash Winter is a consulting tester and conference speaker, working as an independent consultant providing testing, performance engineering, and automation of both build and test. He has been a team member delivering mobile apps and web services for start ups and a leader of teams and change for testing consultancies and their clients. He spends most of his time helping teams think about testing problems, asking questions and coaching when invited.
Suman Bala

Suman Bala is a Test Lead at Sky. She strives to mature testing practice as a key enabler to drive business change benefits and is a strong believer in Test Automation. In the past 12 years, she has designed and developed automation frameworks from scratch for various products, from middleware graphics libraries to e-commerce and mobile apps. She is a quality evangelist who is passionate about providing continuous value through leadership, problem solving and encouraging efficiency. She feels proud of how people's perspective on testing has changed throughout her career.


The notion of Context Driven Testing has spawned conversations about how much context is valuable... but in our roles as software testers, our focus understandably tends to be mostly about context in testing. We very rarely extend our thinking about context beyond our test strategy. Have we ever sat back and thought, how much of this context is about me?

About how I personally see the world and how the world views me? About what my interactions with technology contribute to the quality of the product? About where I find myself and how I belong in this equation of building software?

A crazy thought, perhaps: the context that defines me factors into the overall outlook described by the desired outcome of the product.

In this workshop, we will explore the context of the individual as it pertains to the overall quality of the product. A human variance as a tool and mechanism to change the measure and influence of quality for a more diverse, scalable and sustainable output.

Ash Coleman

Ash, a former chef, put recipes aside when she began her career in software development, falling back on her skills in engineering she acquired as a kid building computers with her brother. A progressive type, Ash has focused her efforts within technology on bringing awareness to the inclusion of women and people of colour, especially in the Context-Driven Testing and Agile communities. An avid fan of matching business needs with technological solutions, you can find her doing her best work on whichever coast is the sunniest. Having helped teams build out testing practices, formulate Agile processes and redefine culture, she now works as an Engineering Manager in Quality for Credit Karma and continues consulting based out of San Francisco.


How do we ensure the feedback we are getting from our automated tests is targeted and informative? What information are we throwing away in an attempt to check for specific data?

How we design assertions, and the tools we use to implement them, determine the value of our automated tests. However, most of the time we neglect our assertions, relying on libraries such as Hamcrest, Chai and Assert. Enter approval testing, a different approach to assertions that can improve test feedback loops. By increasing the scope of what is being asserted without sacrificing speed, reliability or maintainability, approval testing can help superpower the feedback from your automated tests.

In this practical workshop, attendees will learn the how and why of approval testing techniques by creating automated tests using approval testing against different application layers.
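As a taste of the technique, here is a minimal, hand-rolled approval check in Python. The workshop may well use dedicated approval-testing libraries; the file naming and the stubbed response below are purely illustrative:

```python
# A hand-rolled approval check: compare whole outputs against an approved file.
from pathlib import Path


def verify(test_name: str, received: str) -> None:
    """Compare the full received output against a previously approved file.

    On first run, or on any mismatch, the received output is written to disk so a
    human can review it and, if it is correct, promote it to the approved file.
    """
    approved = Path(f"{test_name}.approved.txt")
    received_file = Path(f"{test_name}.received.txt")

    if approved.exists() and approved.read_text() == received:
        received_file.unlink(missing_ok=True)  # clean up any stale received file
        return

    received_file.write_text(received)
    raise AssertionError(
        f"Output differs from {approved}. Review {received_file} and, if it is "
        f"correct, rename it to {approved}."
    )


def test_get_booking_response():
    # Instead of asserting on a handful of fields, approve the whole response body.
    response_body = '{"firstname": "Jane", "lastname": "Doe", "totalprice": 120}'  # stub
    verify("get_booking_response", response_body)
```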

Takeaways

By the end of this workshop students will be able to:

  • Discuss the goals of automated regression testing and feedback loops
  • Describe how approval testing works and differs from traditional asserting
  • Construct approval tests for different interfaces ranging from API to Visual
  • Construct methods to ignore specific data during approval testing
Mark Winteringham

I am a tester, coach, mentor, teacher and international speaker, presenting workshops and talks on technical testing techniques. I’ve worked on award winning projects across a wide variety of technology sectors ranging from broadcast, digital, financial and public sector working with various Web, mobile and desktop technologies.

I’m an expert in technical testing and test automation and a passionate advocate of risk-based automation and automation in testing practices, which I regularly blog about at mwtestconsultancy.co.uk. I’m also the co-founder of the Software Testing Clinic in London, a regular workshop for new and junior testers to receive free mentoring and lessons in software testing. I also have a keen interest in various technologies, regularly developing new apps and Internet of Things devices. You can get in touch with me on Twitter: @2bittester


As the code changes, the tests change too. This session is about the actual work we call “maintaining the tests”.

We will discuss test relevancy and value as the requirements change, and when to (heaven forbid) abandon tests. We’ll see cases where we need to change the level of existing tests (unit, API, UI or any other type) as we add and change functionality, and replace them with the appropriate level. We’ll see how to approach it from either test-first (BDD or TDD) or test-after. We’ll refactor the tests to make them generic as the code becomes more generic, and change the language in which they describe the examples.

As we go we’ll touch on what makes them “maintainable”.

This session is interactive, as I’ll explain the code and walk through the changes, as suggested by the audience. The code will also be available to the attendees to work on their laptops as we go.
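As one small example of the kind of maintenance move discussed here, several near-duplicate tests can often be collapsed into a single generic, parameterised test. A hypothetical sketch in Python/pytest follows; the discount logic is a stand-in, not code from the session:

```python
import pytest


def apply_discount(price: float, customer_type: str) -> float:
    """Stand-in for the code under test."""
    rates = {"standard": 0.0, "member": 0.10, "vip": 0.20}
    return round(price * (1 - rates[customer_type]), 2)


# Before: three copies of the same test differing only in data (omitted for brevity).
# After: one test, with the examples made explicit and easy to extend as requirements change.
@pytest.mark.parametrize(
    "customer_type, expected",
    [("standard", 100.00), ("member", 90.00), ("vip", 80.00)],
)
def test_discount_per_customer_type(customer_type, expected):
    assert apply_discount(100.00, customer_type) == expected
```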

Takeaways

  • Evaluate existing tests
  • How to replace tests from different layers
  • When to throw away tests
Gil Zilberfeld

Gil Zilberfeld has been in software since childhood, writing BASIC programs on his trusty Sinclair ZX81. With more than twenty years of developing commercial software, he has vast experience in software methodology and practices.

Gil has been applying agile principles to product development for more than a decade. From automated testing to exploratory testing, design practices to team collaboration, scrum to kanban, traditional product management to lean startup – he’s done it all. He is still learning from his successes and failures.

Gil speaks frequently in international conferences about unit testing, TDD, agile practices and product management. He is the author of "Everyday Unit Testing", blogs at http://www.gilzilberfeld.com, co-organizer of the Agile Practitioners conference and in his spare time he shoots zombies, for fun.


Our talk tracks are an alternative to our workshops. The afternoon talk tracks are focused on Personal Stories. The talks will be spaced out throughout the afternoon with breaks.

What Can I Learn from Autism as a Tester with Matthew Parker

We learn, love and laugh everyday, but some experiences in life teach us lessons and help us understand more than others. I’ve always had a love of testing, and I’ve always felt like being a ‘good’ dad was something to be proud of. I never expected to become a dad of a child with Autism, and I never expected to learn so much from being a dad to an autistic child that would help me develop as a tester.

Addressing the imposter syndrome I felt as a proud dad suddenly trying to understand autism, and seeing other parents apparently flourishing, allowed me to understand the feelings I came across as a test manager discovering the world of agile testing and automation.

The need for a communication approach to be appropriate to the person being addressed and to the situation you are in, has driven me to better understand communication techniques. Seeing the risks presented with a child with autism due to a lack of risk awareness in the child allowed me to understand how dangerous ostrich syndrome can be in addressing risks – they don’t go away just because you don’t address them.

Talking to other parents of children with autism, and awareness of the autism spectrum, became critical in my understanding of context. It was important to listen to advice and understand how others had addressed challenges, but there is no one size fits all and no perfect solution. It has been finding ways to engage and help Tom develop (where instruction has little value) that led me to really embrace learning and development techniques, particularly coaching, support and guidance by example.

The underlying principle is that learning and development is a constant. Yes, we can and should look to identify opportunities to develop and learn within a professional capacity. However, we shouldn’t lose sight of the lessons we are learning every day in the lives we’re living. Simple tasks like writing a shopping list can teach us about tooling; maybe you make use of tools like Alexa to create your list. Sometimes there are more fundamental things that happen in life that we learn from, like having a child with autism or dealing with our personal health and wellbeing. What is important is that where the opportunity presents itself, we do learn and we apply these lessons wherever we can, including improving our testing craft.

Don't be a Superhero with Ali Hill

Superheroes are all around us. These are the people managers come running to when a critical bug appears in production. Or the people who sacrifice their evenings, weekends and vacation time for their employer.

I used to be one of these superheroes. I used to love being called upon when a critical bug appeared in production. It was great to feel appreciated. As software testers, we don’t experience this feeling of appreciation often enough and for me, it was addictive.

As I got involved in more projects, this superhero status became a problem. I wanted to experience this feeling of appreciation again when things went wrong so I was reluctant to share knowledge with team members. I would often be called upon when an issue arose.

At first, this felt great. When I was answering calls outside work, the praise kept coming. Soon enough though, this became what was expected of me. I was no longer a superhero in the eyes of management, but I was still putting in the excess hours.

Something had to change before I burned myself out completely.

In this talk I want to share how I developed a strategy to become the most effective employee I could be, but within contracted working hours. I learned to share knowledge, challenge excess work that landed on my desk and ensure that all the work I was doing was completed within my 9-5.

I want to share how you can be passionate about your job but also have a very healthy work-life balance.

Be Excellent to Each Other with Christopher Chant

1 in 4 people experience a mental health problem every year. In England, 1 in 6 people report experiencing a common mental health problem (such as anxiety and depression) in any given week* 
We spend most of our waking hours with work colleagues, and the stats suggest that at some point, you or a member of your team will go through a mental health problem. I've been on both sides of the statistics, having lived with depression for most of my life and having worked with and helped colleagues with anxiety and depression.
I want to share what I've learned along the way and what we can do to look after ourselves and our team. Through my story I'll explore the signs to look out for, when it is appropriate to get involved, how I tell if someone is just down or if someone needs help, and how the right support can lead to dramatic, positive developments.
*Stats from mind.org.uk

Mental Health as a Tester with David Williams

In early 2003 I started my career as a gifted, but cocky young tester with steady career progression and a love of learning. I was confident with the skills I had developed but my “I know all the things!” attitude quickly disappeared as I started to realise how broad a tester's skill set could be, and how much I could still improve.

I attended the “Rapid Software Testing” course, taught by James Bach, and read all the books and blogs I could find, improving my testing skills further still. The sky was the limit.

Then, 6 years later in early 2009 everything stalled and went into year after year of flatline, culminating in my career, family, passion and confidence as a tester being decimated.

Coming to a crashing halt 5 years later, I went deep into depression and was affected by a stress-related autoimmune issue, requiring a physical operation to correct.

From that point I simply couldn't see a way to get back into the driving seat, and decided I would quit my career for good. I resigned from my role with no job to go to, and no plans to continue my testing career, or in fact any type of technology role. I was burned-out.

This story is about the stages of my testing career, issues I encountered in those stages, and how others can learn from the mistakes I've made, so that they can avoid having to go through them in the first place.

I was fortunate to have the support of my amazing family and friends to help me out of that terrible situation and persuade me to take one last chance on my testing career.

Through hard work and encouragement from the testing community, I managed to claw my way back from that precipice and I now feel like my passion, my drive, and my career are at an all time high. I’m hopeful that my story will encourage others to look after their mental wellbeing, and to seek help when needed in order to be the best version of themselves.

That's Just The Way You Make Me Feel with Gem Hill

Emotions are hard and messy, and are a part of every day life. I believe that we need to face, pull apart, and accept our emotions in order to work well together.

Tech is full of feelings:

  • Developers put their emotions into their code
  • Testers put their emotions into their bugs
  • Relationship management is a huge part of working in a team

We are sometimes unaware of where emotions are in a team, and how we can help people understand them, and decouple emotions from work where it's unhelpful.

This talk moves from code and bugs (and how testing work outside of bug-finding is often devalued), through to relationship management, accepting some negativity, and how to look after yourself. I'll talk about feelings at each step of the way (whether they are good or bad) and how to recognise these.

I end with a quick look at self-care, and accepting that there will be some things you cannot change.

I am not a therapist. I can't help you specifically with emotions, but I can provide some insights into my experience and thoughts about the messy mix of emotions and tech and how to be aware of how you're feeling.

Christopher Chant

Christopher Chant is a determined and passionate test professional with experience across multiple domains. He has learned to embrace all parts of the development lifecycle as learning opportunities: working in business analysis, development, testing and coaching roles in an attempt to help teams grow and deliver.

When not testing, Christopher spends his time running (not often enough), traveling all over the country to watch Nottingham Forest F.C. lose (occasionally they win), jealously looking at other people's dogs and playing board games.


Gem Hill

Gem is a web tester, podcaster, and all round geek, living and working in Manchester.

She works at a digital agency, mostly testing Drupal sites (though she occasionally dabbles in Magento).

She has two podcasts, one about testing (Let's Talk About Tests, Baby), and one about mental health (Inner Pod). When she's not doing that, she's baking, reading comics, or going to the cinema.


Ali Hill
Ali began his software testing career testing video games before moving into an Agile testing role. He now works as a QA and Continuous Delivery Consultant at ECS Digital. He has a passion for learning. Recently Ali has been interested in learning how software testing can flourish in the world of DevOps and Continuous Delivery and has been sharing his testing knowledge within his cross-functional team. Ali can be found talking about testing on Twitter, blogging or at the Edinburgh Ministry of Testing Meetup.
David Williams

David is a 40 year old father of 2, living in London, United Kingdom.

Across his 15 years in the field of quality and testing he's been a tester, senior, lead, test manager, head of testing, and is now helping eBay UK to transform into a truly Agile group. He has a technical mindset and a real passion for investigation and testing as a profession (especially risk-based, and Exploratory Testing), and his passion is infectious when talking about his experiences.

He's a prolific Twitter and Slack user, plus a testing community advocate, and is in the process of resurrecting his blog, having taken a temporary step back over the last few months due to work commitments. We've seen him speak at several TestBash 99-second talks. David also mentors at the London Software Testing Clinic whenever possible.


Matthew Parker
I've been in testing for 15 years now. My passion for testing comes second only to my family. I am passionate about continual improvement and challenging myself. My early career was very much driven by learning from books and colleagues, as I was thrown in at the deep end as QA Manager (being the only developer showing any interest in making sure any testing was done, with no test department in place). After a number of years in test and test management positions I realised the part of the job I enjoyed the most was supporting the team's growth and development. I’ve recently taken the step into contracting and consultancy.

Security is hard and dangerous, and you need a black hoodie to even qualify as a hacker... NO! Dead wrong!

Security is not something special, separated from other testing. In reality an ordinary exploratory tester can do the bulk of the security work on a product. No haXX0r skillz required!

This workshop is about the hacker mindset and the security domain knowledge. 

In the end I will show you some nice tools for automated vulnerability scanning, some checklists and what you can do yourself to easily heighten the confidence in your environment.

This workshop is for those of you who work in IT or as a tester, but feel like a security n00b (hint: you're not!).

Takeaways

  • The bulk of any security work is really just to have confidence in what your system actually is
  • The Hacker Mindset is closely related to exploratory testing. Instead of verifying function, we want to know that it doesn't work in any _other_ way than expected
  • There are easy to use tools to check your security configurations, and most tools you already use are very useful
  • You will learn keywords in security in order to sound like a pro when you talk to your team :)
Emma Lilliestam
Emma Lilliestam is a test consultant at House of Test in Sweden. She is passionate about security, DevOps and biohacking, as well as the ethical implications of software. She is a chip-implanted cyborg and spends her spare time soldering hardware into blinking wearables. Once upon a time she was a journalist, and she believes that a big part of software testing is about asking people the right questions.
Saturday, 6th April 2019:
All-day Sessions

We see the Open Space as an initiative to get people talking more, and perhaps go a bit deeper on some topics. Those topics could be anything, even what you may have heard at the conference. By deeper, we mean many things, such as discussions and debates, plus more hands-on things such as tool demos, coding and some actual testing. It could be anything.

So the TestBash Brighton Open Space will essentially take the form of an unconference. There will be no schedule. Instead we, and I really do mean we, all attendees, will create the schedule in the morning. Everyone will be able to propose a session; in doing so, though, you take ownership of facilitating that session. Once everyone has pitched their session ideas, we will bring them all together on a big planner and create our very own conference. Depending on the number of attendees, we expect to have 5-6 tracks, so lots of variety.

Open Space is the only process that focuses on expanding time and space for the force of self-organisation to do its thing. Although one can’t predict specific outcomes, it’s always highly productive for whatever issue people want to attend to. Some of the inspiring side effects that are regularly noted are laughter, hard work which feels like play, surprising results and fascinating new questions. - Michael M Pannwitz

It really is a fantastic format: it allows you to get answers to the problems you are actually facing. With conference talks you are always trying to align the speaker's views and ideas to your context, whereas with this format you get to bring your context to the forefront.

Richard Bradshaw
Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years of testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, and is currently the FriendlyBoss at The Ministry of Testing. Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He is also the creator of the YouTube channel, Whiteboard Testing.
Kim Knup

Kim is a Senior Digital Tester at Legal and General, co-organiser of the Brighton tester meet-up #TestActually, and event host for the Brighton Software Testing Clinic. She is passionate about usability and likes to do what the user (apparently) would never do.

Over the years she’s worked in linguistic games testing, and worked with big data archiving and asset management tools as well as recruiting and leading a small team of testers. Her main interests are usability testing and using automation tools to aid exploratory testing.


Conference
Friday, 5th April 2019

How do you change culture, mindset, and skills in a global organization entrenched in practices that were outdated 20 years ago?

One small, frustrating step at a time.

In this talk I'll share my experiences working at a Fortune 10 company where I helped small teams of testers on three different continents dramatically change how they helped their projects deliver value to the company. I'll talk about dealing with people (NOT RESOURCES!), navigating ways through corporate bureaucracy and fiefdoms, and most importantly how to get advocates at levels that can actually help you with change.

This talk will be full of the abject failures we suffered, but will also highlight some of the amazing changes we saw over a three-year period.

Jim Holmes
Jim is the owner/principal of Guidepost Systems which lets him engage directly with struggling organizations. He has been in various corners of the IT world since joining the US Air Force in 1982. He’s spent time in LAN/WAN and server management roles in addition to many years helping teams and customers deliver great systems. Jim has worked with organizations ranging from start ups to Fortune 10 companies to improve their delivery processes and ship better value to their customers. When not at work you might find Jim in the kitchen with a glass of wine, playing Xbox, hiking with his family, or banished to the garage while trying to practice his guitar.

The Machines are on the rise! Everybody is aware of the ever-growing presence of machine learning algorithms all around us. The Machines will soon rule the world. At least that’s what the vendors try to tell us.

Every piece of software that claims to provide a solution should also be testable, to prove that it actually is a solution for the given problem. When it comes to testing machine learning, the excuse is often: "We cannot test this, it's ML!" It's time to do something about that!

Not too many have yet taken the time to understand what machine learning actually does, or how to test these algorithms.

In this talk I want to give the audience a chance to make that first step and understand the very basics of Machine Learning algorithms, based on the "Hello World" example of Machine Learning: the handwritten digit recognition algorithm for the MNIST database.

Every tester needs a good model to understand what to test. I will explain the basics of the mystical tensor layers, and we'll take a glimpse into them, so that you get an idea of what actually happens in there.

We will touch on how testing works in this and some other examples, and what observability and traceability look like. And we will take a look at a few recognition failures to understand why machine learning is no longer a black-and-white pass/fail scenario, and why some of the approaches to testing that we used for decades start to fail.
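For readers who want to picture that "Hello World" example before the talk, here is a minimal MNIST digit-recognition sketch using Keras. The talk does not prescribe a framework; this is purely illustrative:

```python
import tensorflow as tf

# 60,000 training and 10,000 test images of handwritten digits, 28x28 pixels each.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 784 input values per image
    tf.keras.layers.Dense(128, activation="relu"),    # one hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # probability per digit 0-9
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)

# Note for testers: accuracy is a statistic over a data set, not a pass/fail verdict.
# Individual misclassifications are expected even in a "good" model.
loss, accuracy = model.evaluate(x_test, y_test)
print(f"test accuracy: {accuracy:.3f}")
```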

Patrick Prill
Patrick has over 15 years of experience in software testing. After working for 10 years on the same big project, but in nearly every role there was from tester to test manager, he became test team lead for a small team in a product and consulting company for the automotive industry, dealing with projects all over the globe. For more than two years now he has been working as a test consultant for QualityMinds, supporting different clients with his hands-on experience. In 2017 Patrick organized the first TestBash in Munich, and retired from that job immediately after. Patrick lives outside of Munich, Germany. In his little spare time he continually tries to improve his skills as a wood turner and carpenter.

This is a personal account, a transformational story.

Over the last 2 years I've been involved more and more with Technical Testing: performance, security, automation. I've found that it's only been by Owning my Craft that I've been able to provide the best work I can possibly produce for my company and for the testing community at large.

Software Testing is my Craft; it took me a long time to teach myself that it's okay to love my Craft. It's okay for me to want to better myself to make my work better, to be the best Software Tester I can be.

In 2016 I was thrown into the world of APIs, Microservices and Containers - none of which I had real (if any) experience with.

But here I was, the 'most experienced' Software Tester at my company. I needed to do something, right? I needed to step up and set a good example.

The development team even gave me a choice (though they may not realise it) that I could wait until there was a traditional front end to perform functional testing against or I could immerse myself in the backend of this project, testing from the ground up, adding value and strengthening the software creation process for this project.

So I jumped...

This talk will highlight how I learned more technical skills than I thought I could within a very short space of time, how I took full creative control of the Testing on that project, and how I elevated myself beyond my own (and probably the development team's) expectations with my work. By Owning my Craft.

 

Mike Smith
Born and raised in Blackpool, I've had some seriously unique jobs (making Blackpool Rock and being a 999 Call Operator, to name just two), but I always had a passion for and a fascination with technology and computers. Moving to Nottingham with my infinitely better half, I settled into an IT support job that led to an accidental promotion into testing. From there I've been Tester, Test Team Lead and, more recently, Principal Test Engineer. This career has reminded me of my passion for technology. Now, as an R&D Architect for Ideagen Software, I have the freedom to start giving something back to a community and craft that has given me a lot, allowing me to pursue my love of technology and my recently discovered love of teaching.

At our company we do things at scale; to give you an idea of that scale, just one of our several hundred APIs serves upwards of 1,000 requests per second, and we get 100 million page requests a day. We're an international business with 16.5 million active customers, and we shipped 1.7 million parcels last Black Friday.

We have over 200 QAs, split over six locations, and fostering a sense of a community that fits everybody’s needs is difficult. That isn’t to say we haven’t tried. This session will talk about our journey towards where we are now in terms of building a community of practice and the lessons we've learnt along the way that can be taken on board by communities of any size. We'll also discuss some of the key benefits we get from having a thriving community of QAs.

The key points of our talk include: 

  • How and why certain types of community don't always work. We've tried various community models, and for a number of reasons some of these have failed.
  • It's okay to fail and re-evaluate - if what you have isn't working (such as one large, all-encompassing community), then change it. It's all about iteration!
  • Common problems we faced (and that you may face) when building a community, and how we overcame them - e.g. lack of enthusiasm (but passion is infectious, and now we hire people with that in mind) and work and time pressures (getting line manager buy-in and leadership recognition).
  • The different events we have held and their outcomes - which events proved popular, which proved not so popular, and why.
  • Things you can do to set your community (or communities) up for success - hiring the right people, building a brand, and making it as fun and inclusive as possible.
Lindsay Strydom
Reformed Luddite and accidental QA with over five years of experience testing e-commerce native and web applications. Queen of the gif and the most active Slack user in the company (but not a slacker).
Gareth Waterhouse
Seasoned QA with a decade of experience, father of two and Sunderland fan (please don't hold that against us). Regularly attends QA/testing meetups and has given numerous presentations at them. Always looking at ways to develop others, and regularly blogs about things he thinks may be of interest.

Did you ever wonder how to improve your testing skills? Well, I did. I wanted to learn where I stood in terms of my testing knowledge and at the same time improve my exploration and automation skills. Maybe dive deeper into special areas like security or accessibility testing. I read and thought a lot, but what I felt I was missing was hands-on practice. So I decided to run an experiment.

 

My hypothesis: “I believe that pairing and mobbing with fellow testers from the community on hands-on exploratory testing and automation will result in continuously increasing skills and knowledge, as well as serendipitous learning. I’ll know I have succeeded when I have noted down at least one concrete new insight or applied one new technique per testing session and shared it with the community.”

 

In this talk, I will share the lessons learned on my journey, as well as tips for doing pair testing sessions yourself. Let’s find out whether my hypothesis proved true: that a testing tour is indeed a feasible and valuable way to improve your testing knowledge and skills!

Elisabeth Hocke
Having graduated in sinology, Lisi fell into agile and testing in 2009 and has been infected with the agile bug ever since. She’s especially passionate about the whole-team approach to testing and quality, as well as the continuous learning mindset behind it. Building great products which deliver value, together with great people, is what motivates her and keeps her going. She received a lot from the community; now she’s giving back by sharing her stories and experience. She tweets as @lisihocke and blogs at www.lisihocke.com. In her free time you can find her in the gym running after a volleyball, having a good time with her friends, or delving into games and stories of any kind.

Could exploring diverse disciplines and industries lead to dramatic improvements in your effectiveness as a tester?

The craft of testing has its origins in the social sciences. The characteristics that we seek to develop as testers include communication, creativity, critical thinking and curiosity.  

Typically we focus on metaphors from engineering and manufacturing and learn about the disciplines of these industries.

There is so much to gain by learning about other industries, such as aviation and healthcare, and by studying other disciplines: heuristics, for instance, originated in behavioural economics, and mind mapping was developed over centuries by philosophers and psychologists.

In this talk, I will share my journey of being transformed from a regimented confirmation tester to a context-based exploratory tester.

These changes came about through the challenges of testing software in different industries and, more importantly, through studying non-engineering disciplines.

Conor Fitzgerald
Based in Cork, Ireland. Software tester with over 10 years' experience. I love testing and continuously work on improving as a tester. My experience was gained through a variety of testing roles in a wide range of industries, from embedded systems to financial systems, with companies ranging from startups to large multinationals like Intel. I am the co-founder of the Ministry of Testing Cork. My hobbies include kayaking, hill walking, the gym, Toastmasters and yoga.

Software testers often seem to feel intimidated by security testing. It seems too technical, there’s so much to learn, and where the hell do you start? How do I even know if something is a vulnerability? How do I incorporate all this into my testing? Penetration testers are viewed as the technical elite, with their hacker mindsets, cool tools and laissez-faire attitude to digital boundaries. But our two professions have much more in common than you might think; we are two sides of the same coin. We can learn from each other, and software testers already have many of the skills which apply to security - it need not be left only to the hackers.

This talk will bring together Jay, a pen tester, and Claire, a software tester, to talk about the things which unite us, both human and technical, the common challenges we both face (will we be automated out of our jobs?), and the language which brings us together yet also causes no end of problems. We’ll show how testers need not fear security, challenge the perception of pen testers, and explain how testers can apply their existing skills to start thinking about security while working in their teams and to champion security in the companies they work in.
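
As a hedged illustration of the kind of check a software tester can already run with existing skills, here is a small sketch that flags missing HTTP security headers on a response; the URL and the list of expected headers are assumptions chosen for illustration, not a checklist from the talk.

# Illustrative sketch: a tester-friendly check that flags missing
# HTTP security headers on a response. The URL is a placeholder.
import urllib.request

EXPECTED_HEADERS = [
    "Strict-Transport-Security",  # enforce HTTPS
    "Content-Security-Policy",    # restrict where scripts can be loaded from
    "X-Content-Type-Options",     # prevent MIME-type sniffing
    "X-Frame-Options",            # mitigate clickjacking
]

def check_security_headers(url):
    """Return the expected security headers that are missing from the response."""
    with urllib.request.urlopen(url) as response:
        present = {name.lower() for name in response.headers.keys()}
    return [h for h in EXPECTED_HEADERS if h.lower() not in present]

if __name__ == "__main__":
    for header in check_security_headers("https://example.com"):
        print(f"Missing security header: {header}")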

Jahmel Harris

Jahmel (Jay) is a security researcher and hacker. He co-founded Digital Interruption last year, a security consultancy which helps secure organisations through a mix of penetration testing and embedding security into application development pipelines. With a background not only in security testing but also in software development, Jahmel is able to advise engineers on balancing security with functionality.

Jahmel has a particular interest in mobile application security, reverse engineering and radio, and has presented talks and workshops at home in the UK and abroad. He also runs Manchester Grey Hats – a group aiming to bring hackers together to share knowledge and skills.


Claire Reckless

Claire is a Test Lead at MoneySuperMarket in Manchester, with prior experience in testing Financial and Security software.

A tester for over 10 years, she is active within the testing community, contributing articles, speaking at conferences including TestBash Manchester and Nordic Testing Days, and co-hosting Software Testing Clinic Manchester every month.


We humans are incredibly crafty and resourceful. From the dawn of time, survival has depended on our ability to navigate the world through schemas and shortcuts in an effort to prolong our existence. Over time we built up skills and habits to protect and preserve ourselves from the harsh elements. This evolutionary adaptation is nothing less than an amazing accomplishment, a true nod to Darwin, who shed light on this survival tactic!

And it was! But now it is 2019. Conditions are a tad different from what they were back in the day, and adaptation has turned these skills and survival mechanisms into biases. How do we build stronger communities in a time when borders are only meant to retain legalities, and our lives increasingly benefit from the mix and match of the diversity around us?

In this talk, I will address diversity by decoding bias, analyzing our current pitfalls in trying to retain a diverse culture, and providing some simple heuristics to help navigate the space that is Diversity and Inclusion.

Ash Coleman

Ash, a former chef, put recipes aside when she began her career in software development, falling back on the engineering skills she acquired as a kid building computers with her brother. A progressive type, Ash has focused her efforts within technology on raising awareness of the inclusion of women and people of colour, especially in the Context-Driven Testing and Agile communities. An avid fan of matching business needs with technological solutions, you can find her doing her best work on whichever coast is the sunniest. Having helped teams build out testing practices, formulate Agile processes and redefine culture, she now works as an Engineering Manager in Quality for Credit Karma and continues consulting out of San Francisco.


Development and deployment contexts have changed considerably over the last few years. The discipline of performance testing has had difficulty keeping up with modern testing principles and software development and deployment processes.

Most people still see performance testing as a single experiment, run against a completely assembled, code-frozen, production-resourced system, with the “accuracy” of simulation and environment considered critical to the value of the data the test provides. But what can we do to provide actionable and timely information about performance and reliability when the software is not complete, when the system is not yet assembled, or when the software will be deployed in more than one environment? How can you effectively performance test in continuous integration to continually collect feedback on the project’s performance and scalability characteristics?

Eric will deconstruct “realism” in performance simulation, talk about performance testing more cheaply to test more often, and suggest strategies and techniques to get there.

Takeaways

  • A formulation of the risks we performance test to learn more about
  • Techniques for performance testing in continuous integration
  • Strategies for load models that can yield information across builds
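
As one possible illustration of performance testing in continuous integration, here is a sketch of a smoke-level load check that fails the build when the 95th-percentile response time exceeds a budget; the endpoint, request count, concurrency and threshold are assumptions for illustration, not recommendations from the talk.

# Sketch of a smoke-level performance check suitable for a CI pipeline.
# Endpoint, request count, concurrency and latency budget are illustrative.
import concurrent.futures
import statistics
import sys
import time
import urllib.request

URL = "https://example.com/health"   # placeholder endpoint
REQUESTS = 200
CONCURRENCY = 10
P95_BUDGET_SECONDS = 0.5

def timed_request(url):
    """Issue one GET request and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()
    return time.perf_counter() - start

def main():
    with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        durations = list(pool.map(timed_request, [URL] * REQUESTS))
    p95 = statistics.quantiles(durations, n=100)[94]  # 95th percentile
    print(f"p95 latency: {p95:.3f}s over {REQUESTS} requests")
    # A non-zero exit code fails the build when the latency budget is exceeded.
    return 1 if p95 > P95_BUDGET_SECONDS else 0

if __name__ == "__main__":
    sys.exit(main())

In a pipeline, a failing exit code from a script like this turns a performance regression into a failing build, which is one way to get the continual feedback on performance and scalability that the abstract describes.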

Eric Proegler

Eric Proegler has worked in testing for 20 years. He is a Director of Test Engineering for Medidata Solutions in San Francisco, California.

Eric is the President of the Association for Software Testing. He is also the lead organizer for WOPR, the Workshop on Performance and Reliability. He’s presented and facilitated at CAST, Agile2015, Jenkins World, STARWEST, Oredev, STPCon, PNSQC, WOPR, CodeFest, and STiFS. He podcasts about Software Performance with the PerfBytes crew at www.perfbytes.com.

In his free time, Eric spends time with family, runs a science fiction book club, sees a lot of stand-up comedy and live music, seeks out street food from all over, and follows professional basketball.