Acquisition

Discovery BPA Evaluation Process: How to Operationalize this Practice

Oct 26, 2020

At the Technology Transformation Services (TTS), we take pride in iterating on everything we do and in using cross-functional teams in all aspects of our work. Applying modern techniques and emerging technologies in our own work has given us a great deal of insight into what it takes to acquire services and products from contractors with the technical expertise to deliver them to our agency partners.

After our first experiment with a Blanket Purchase Agreement (BPA), the Agile BPA (aBPA), 18F Acquisitions subject matter experts (SMEs) and their counterparts at the IT Modernization Centers of Excellence (CoE) developed the Discovery BPA (dBPA). While the dBPA serves a different purpose, many of the lessons learned from establishing the initial set of agreements and awarding task orders on the aBPA were incorporated into the solicitation that resulted in the dBPA.

With the first task orders awarded under the dBPA now underway and a retrospective on the solicitation completed, the evaluation process itself emerged as something the acquisition workforce considered worth turning into a repeatable practice. This guide should allow any acquisition project team, at any level of government, to adapt our process to their organization’s specific requirements.

This guide will allow a team to: (1) lay the foundation; (2) use the right tools; (3) use a draft Request for Quote (RFQ) as a Request for Information (RFI); (4) run the evaluation itself; and (5) adapt dBPA best practices to their own procurements.

Lay the foundation

We were able to lay the proper foundation for this kind of approach by drawing on proven techniques and attitudes from foundational Agile ideas and folding them into the practices we use every day. Most important among those practices: iterate often, so that your team’s latest lessons learned are always reflected in how you work and your practices never become outdated.

Product thinking keeps us constantly moving toward the right outcomes for our users. Having your team treat procurement packages as their products will help them remember what they are working so hard to achieve. That attitude feeds into a user-centered design focus, which will help you understand the true value your partners seek.

Cross-functional teams are key to a successful program because they bring in all the needed players early on. This is especially necessary when implementing a DevSecOps culture to build robust solutions that a team can test early and often.

Supporting this type of team requires working with acquisition experts to keep contract option periods, or contracts themselves, scoped appropriately to represent specific phases. This type of modularity allows teams to build a program with the right contractors for each project. 

Use the right tools

Your team can choose from a range of tools to build procurement products - and products are exactly what these standardized document packages truly are. The key to successfully developing your offerings is using the right tools for you.

For internal tools, we used a collaborative software suite for both the aBPA and dBPA, allowing us to work on documents, sheets, and forms together, remotely, and in real time. By using features that let components of each application interact with one another, we streamlined a large portion of our work - and even automated some of it - with existing functionality. Just as important, the process itself can be adapted to platforms from other industry leaders, so it is repeatable as well as scalable across the government.
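As a concrete illustration of that kind of streamlining, here is a minimal sketch in Python of one small piece of it: taking a hypothetical CSV export of form responses and splitting it into per-functional-area packets for evaluators. The file and column names (phase1_responses.csv, functional_area) are assumptions for the example, not our actual artifacts; any platform that can export responses to CSV could feed a script like this.

```python
import csv
from collections import defaultdict
from pathlib import Path

# Hypothetical export of form responses; file and column names are illustrative only.
RESPONSES_CSV = "phase1_responses.csv"
OUTPUT_DIR = Path("evaluator_packets")

def split_responses_by_functional_area(responses_csv: str, output_dir: Path) -> None:
    """Group raw form responses by functional area so each evaluator only
    sees the submissions assigned to them."""
    groups = defaultdict(list)
    with open(responses_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            groups[row["functional_area"]].append(row)

    output_dir.mkdir(exist_ok=True)
    for area, rows in groups.items():
        out_path = output_dir / f"{area.lower().replace(' ', '_')}.csv"
        with open(out_path, "w", newline="", encoding="utf-8") as out:
            writer = csv.DictWriter(out, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)

if __name__ == "__main__":
    split_responses_by_functional_area(RESPONSES_CSV, OUTPUT_DIR)
```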

While those tools worked best for internal use, the way we communicated with external partners was just as important. Our solicitation was available through the most popular government points of entry, but we also used social media platforms to advertise that it was posted on a publicly available Git-based repository. It was important for us to have the version control that comes with Git and the transparency that comes with a free, publicly accessible platform.

Finally, while we generated informal artifacts, future procurements can use these tools and the 18F Methods to generate formal artifacts that can be shared with industry while also ensuring the evaluation factors match the requirements themselves. Whether it is the output of structured workshops, notes taken during stakeholder and user interviews, or a journey map developed to explain user scenarios, the more that can be shared, the better.

This evaluation process would not have worked without these tools; each provided a feature essential to the way the process was run.

Use a draft RFQ as an RFI

One of the most successful market research activities our team used was a draft RFQ issued as part of an RFI. Running the RFI involved a great deal of logistics and planning; the key activities we identified for replicating our process are the following:

  • Create questions about the scope as well as the evaluation.
  • Use a standard form to collect responses.
  • Post the draft RFQ to a publicly available Git-based repository.
  • Reach out through social media, blog posts, and with the help of your Communications team (at GSA it is the experts over at OSC, the Office of Strategic Communication).
  • Have the entire team collaborate in real time, whether in person or on a video call, with as much time as necessary dedicated to reviewing questions and drafting answers (for us, that meant two days at Region 1’s Assisted Acquisition Services team’s Boston offices).
  • Issue communications through OSC and social media to outline the changes made as a result of industry feedback.

Success using this RFI technique meant our expectations for the solicitation would align with industry’s understanding. Using the principles outlined in the Myth-busting #4 memo by the Office of Federal Procurement Policy (OFPP), as well as our own experience with previous buys, we specifically asked industry for comments on our evaluation process. While we requested information on other aspects of our solicitation, when it came to the evaluation, our questions were very focused.

We asked specifically about the impact our proposed evaluation process would have on the ability of a business to showcase their technical skills. In particular, we asked for suggestions that would allow us to strengthen the solicitation itself in order to encourage participation of companies that had innovative technology that could meet our factors, regardless of their size. We opened up everything in our RFQ to industry feedback, not just the objectives, tasks, and deliverables.

Based on the feedback we received, we made two important changes to the RFQ that had a major impact on our evaluation process:

  1. We originally required contractors to submit a response for a minimum of three of the seven functional areas, but we changed this to one, so that a business of any size with real expertise in a single field could still compete.
  2. We used a technical challenge and scenario question, combining the technical demonstration and down-select practices cited in FAI’s Periodic Table of Innovations, to make sure we could test teams without using oral presentations.

The evaluation itself  

If we break the evaluation down into numbers, we received and answered 315 questions and received over 90 total submissions from offerors. Considering the size of our team, the deadline set by leadership, and the demands of our daily professional obligations, we found our process successful in that roughly 50% of the responses passed through each phase.

Everything we did in our evaluation had been done before, but perhaps not all together at once. We point this out because this is an innovation, not an invention, meaning it is backed by the FAR and GSAM. The “new” process consisted of the following three-phase approach:

  • Phase I: Technical Challenge
  • Phase II: Scenario Question
  • Phase III: Technical Volume (consisting of technical and management approach as well as an emphasis on similar experience rather than past performance)

The first thing we did, which affected all three phases, was to remove a barrier to entry. Originally, contractors would have needed to respond to a set number of functional areas. By removing that requirement and allowing contractors to submit a response to just one functional area, we believe we enabled a larger number of small businesses to participate - and ultimately receive an agreement on the dBPA - than would otherwise have been involved. In doing so, we beat our own small business goals without using a set-aside while also ensuring some of the most cutting-edge businesses could participate.

The Evaluation Timeline

  • Solicitation close (1 day): Collect all responses, by phase, to create templates and “Mad Libs” for evaluators.
  • Phase I (7 days): Responses were provided within 48 hours, with the questions set to mimic a real-world issue we face.
  • Phase II (8 days): Evaluate the scenario question, again with an “answer guide,” testing the ability to make “it” stick.
  • Phase III (12 days): Run through the technical proposal and management approach with an eye on similar experience.
  • Award (22 days): Using the Google Sheets that held all the responses, generate all award documentation and brief FAC.

Phase I - Technical Challenge

We knew we would receive a high volume of responses, and we also had a very tight timeline. As a result, our preferred approach - asking for an actual proof of concept using a dataset available on data.gov, along with an oral presentation where the government and industry partners could talk through the approach - was not feasible.

That is why we used a standardized form (links to all of them are in the RFQ Amendment) to ask a specific, technically focused question with a half-page response limit, given to industry for a limited period of time; this became our Technical Challenge. Our technical SMEs had an answer key that listed an objective set of terms and concepts that had to appear in a contractor’s response. If we had had time for a demonstration, we could have seen whether offerors knew how to apply those concepts, but under the circumstances, simply knowing the right concepts was what the technical SMEs who developed the questions and answers were looking for in this phase.
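Our actual answer keys aren’t public, but the mechanics behind them are simple enough to sketch. Below is a hypothetical Python illustration: given a set of required concepts (with accepted synonyms) and the text of a half-page response, it reports which concepts were mentioned. The concepts and terms shown are placeholders, not the real key, and a check like this only supports consistency across evaluators; it doesn’t replace their judgment.

```python
import re

# Placeholder concepts and synonyms; the real answer key came from our technical SMEs.
ANSWER_KEY = {
    "containerization": ["container", "docker", "kubernetes"],
    "infrastructure as code": ["infrastructure as code", "terraform", "cloudformation"],
    "continuous integration": ["continuous integration", "ci/cd", "automated test"],
}

def concept_coverage(response_text: str, answer_key: dict) -> dict:
    """Return, for each required concept, whether any accepted term appears in the response."""
    text = response_text.lower()
    return {
        concept: any(re.search(re.escape(term), text) for term in terms)
        for concept, terms in answer_key.items()
    }

if __name__ == "__main__":
    sample = (
        "We would stand up a CI/CD pipeline, package the service in containers, "
        "and manage the infrastructure as code with Terraform."
    )
    coverage = concept_coverage(sample, ANSWER_KEY)
    for concept, mentioned in coverage.items():
        print(f"{concept}: {'mentioned' if mentioned else 'missing'}")
    print(f"{sum(coverage.values())} of {len(coverage)} concepts covered")
```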

The Technical Challenge was a great hit with industry and evaluators alike, and a great way to “vet” our potential contractor pool. The half-page limit ensured we didn’t end up with buzzwords or vague ideas, but rather focused responses that skipped the sales pitch. Our evaluators have been happy with the mix of labor categories (LCATs) available to them, and the reasoned thought behind them is apparent in the responses to our order-level RFIs and RFQs.

To see for yourself what we presented to contractors, check out the sample challenges we included on our dBPA repository.

Phase II - Scenario Question

While the same concerns about the volume of responses and the time to evaluate them remained, the Scenario Question benefited from many of the techniques that made the Technical Challenge a success. The Scenario Question sought to address how contractors would swing the pendulum from one end to the other, implementing change management in a hypothetical example of intense resistance. The human element involved in any services procurement had to be considered, and that is what the answer to the question sought to evaluate.

This phase used a form-based question, with a one-page response limit, given to industry to consider during the entire evaluation. The answer key followed a format similar to the one used in the Technical Challenge; the main difference was the Scenario Question’s focus on evaluating how a contractor would react when given a longer period of time to think through their approach, something they didn’t have with the Technical Challenge.

The Scenario Question was well received by both the contractors and the SMEs in that it showed the importance of soft skills to our team’s success. While we wanted the best technical teams, we also wanted to know that the knowledge and services provided would be transferred to our agency partners - and that the knowledge transfer would stick. Our evaluators have been happy with the emphasis all the dBPA contractors placed on transitioning their work to another team once a project is completed. To find out exactly what we presented to contractors, see the sample scenario we included on our dBPA repository.

Phase III - Technical Volume

There will always be a need for a certain amount of written material. This includes information about how a team manages their technical resources, their hiring and training processes, and their overall experience with the work required. While still maintaining a page limit that necessitated clear and concise language, we tried to give contractors the opportunity to showcase their individuality - something that didn’t come through during the first two phases, where standardized forms made every response look much the same.

One of the factors in our Technical Volume that encouraged participation of companies who could provide emerging technology was a focus on similar experience rather than past performance. Many of the leaders in emerging technologies have experience in the private sector but have not yet broken into the public sector. As one of the main goals for the dBPA was to help agencies find paths towards implementing emerging technologies into their infrastructure, finding those who are able to provide us with private sector best practices in our public sector use cases (something we were evaluating during those first two phases) was key.

Evaluation logistics

To operationalize these evaluations, we had to be prepared on the back end to organize and administratively manage each step of the process. The main areas we focused on included:

  • Kickoffs: Starting every phase with a kickoff, unique to that phase, was crucial. Not only did it ensure every evaluator was focused on that specific portion of the contractor’s response, it also kept evaluators from being overwhelmed by the totality of the responses.
  • Reviews: Having the team at R1 review evaluators’ responses for consistency and objectivity after each phase was incredibly important to ensure timelines did not slip and that responses were thoroughly evaluated in accordance with the procedures and factors in the RFQ.
  • Collaboration: Because of the close collaboration between R1 OGC and R1’s CO, everyone knew exactly what was required and expected of them in terms of documentation and compliance.
  • Schedules: Evaluators’ calendars were blocked off and regular meetings were held between the acquisition team and the evaluators as well as among the acquisition team itself in order to discuss progress and timelines, unique issues being faced by individual evaluators, and to pivot and prepare for future phases.
  • Vetting: The new process itself had to be vetted, as there wasn’t enough time to experiment with it or to issue a second RFI. As a result, relying on the cross-functional teams at our organization as well as other industry partners who work with us in similar engagements was important.
  • Sustainability: Without the support of supervisory staff, it would have been very difficult to meet our deadlines, considering the work required to properly evaluate these responses. Teams who wish to replicate this process without that support should plan for additional time.

Best practices

There were three ideas in this RFQ that we intend to repeat and incorporate into future procurements, when feasible:

  1. Provide standard forms that use templates for evaluators to submit their responses, allowing for a standard and consistent response format, while making it easier for the team to review those responses and copy/paste them into other documents as needed.
  2. Create a “Mad Libs” style answer guide structure for evaluators to follow, allowing them to focus their evaluations on the aspects of the responses most important to the evaluation itself and giving the acquisition team justifications that didn’t require follow-ups for more information.
  3. Use a team of technical experts from the project itself as evaluators - or, where that isn’t possible, from projects that are impacted by the buy - ensuring they are dedicated to the process and truly invested in finding the best fit.

In order to make sure our evaluators were able to successfully complete their evaluations while managing their day-to-day work, the following three steps were key:

  1. Eliminating technically unacceptable contractors at the start provides the evaluator and acquisition teams the ability to focus and avoid fatigue.
  2. Ensuring evaluators’ supervisors understand the workload being placed on these individuals, so that they can prioritize the evaluation over their usual activities and find the best contractor for the requirements, is an investment in the long-term success of the project.
  3. The tighter the timelines, the greater the expertise required from the individuals on the entire team, regardless of whether they are on the project or acquisition side of the house. Kickoffs and reviews between phases shouldn’t be sacrificed and must be incorporated into timelines.

Some final thoughts

The tools and process we used in our evaluation allowed us one final best practice that was lauded by contractors and our team alike: the debrief letter. Our debrief letters outlined not only the entirety of what we evaluated, but also how we evaluated those factors, using objective measures, when reviewing that particular contractor’s response.

By using a standard template, much like we did with our other documents in the evaluation process, we created a repeatable process for the more than 90 contractors who answered our RFQ. The data gathered through the form-based responses allowed us to quickly and easily populate the debrief letter template. Because we used the Mad Libs approach to generate those evaluations, the information provided was consistent and substantive regardless of the functional area or evaluator, giving industry the kind of information that we hope helps them obtain awards with us in the future.
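To make that concrete, here is a hedged sketch of how a debrief letter template could be filled from collected evaluation data. The template text, file name, and column names (contractor, functional_area, the phase summaries) are invented for the example; our actual letters followed our own format and drew on the evaluation sheets described above.

```python
import csv
from string import Template

# Illustrative template; the actual debrief letters followed a different, fuller format.
DEBRIEF_TEMPLATE = Template(
    "Dear $contractor,\n\n"
    "Thank you for your response under the $functional_area functional area.\n\n"
    "Phase I (Technical Challenge): $phase1_summary\n"
    "Phase II (Scenario Question): $phase2_summary\n"
    "Phase III (Technical Volume): $phase3_summary\n"
)

def generate_debrief_letters(evaluations_csv: str) -> None:
    """Write one letter per row of the evaluation spreadsheet export
    (column names here are assumptions for the sketch)."""
    with open(evaluations_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            letter = DEBRIEF_TEMPLATE.substitute(row)
            filename = f"debrief_{row['contractor'].replace(' ', '_')}.txt"
            with open(filename, "w", encoding="utf-8") as out:
                out.write(letter)

if __name__ == "__main__":
    generate_debrief_letters("evaluations.csv")
```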

As always, we love feedback on everything we do, so we would appreciate any comments or suggestions to improve this guide, especially if they are based on lessons learned from applying it yourself!