Michelle Woodyear
Beyond Company Reports: Key Takeaways from the LMA 2026 Pre-Conference on AI, BI, and CI
Takeaways from "Beyond Company Reports: AI-Powered and BI-Enriched CI for Strategic Impact," a session in the LMA 2026 Annual Conference Pre-Conference Program, "AI, BI, CI: The ABCs of Data and Law Firm Intelligence."
Panelists: Amy Wisinski, Heathcock Marketing (formerly CI Director at Winston & Strawn); Matt Bloom, Associate Director, Proskauer Rose; and Nathalie Noel, Jenner & Block.


Five Key Takeaways
1. If your CI function's value is measured by report volume, AI is coming for it.
The panel was direct about this: if the answer to "what's the impact of your CI function?" is "we produced 400 reports last year," then what happens when AI can do the same work faster? The path forward is moving from tactical report production to strategic analysis and recommendations.
2. CI has been stuck in tactical mode for structural reasons, not just resource constraints.
Many CI professionals entered the field through information science or research backgrounds, and the work naturally gravitates toward data gathering. Performance reviews reward output volume. Budget and staffing constraints leave little room for strategic work. And firms haven't asked for more, because the tactical work has served them well. Until now.
3. AI doesn't replace the CI professional. It replaces the parts of the job that kept CI professionals from doing their best work.
The panel demonstrated how AI can automate company report generation, executive movement monitoring, and competitor trend analysis, freeing CI teams to focus on interpretation, strategy, and the "so what" that turns information into action.
4. The human in the loop is not optional.
Every use case the panel walked through came with the same caveat: AI outputs require validation. Data platforms get entity matching wrong. AI tools give different answers to the same prompt. Clients are opting out of having their data fed into AI systems. And a high-ranking partner who assembles his own AI-generated report and sends it to CI for validation is a scenario that's already happening.
5. The tools are powerful, but the data underneath them is fragile.
Entity matching failures, inconsistent industry taxonomies across practice groups, disconnected internal systems, and ethical walls that create incomplete data sets all complicate AI-assisted CI work. The panel's advice: become friends with your privacy and information security teams before you need to.
Meet the Panel
Session 1 established that competitive intelligence is a discipline for everyone. Session 2 asked: if that's true, what are the people who do it full-time doing differently? How are AI and BI tools transforming their workflows? And what does CI look like when it moves from delivering data to delivering recommendations?
Amy Wisinski, formerly the CI Director at Winston & Strawn and now with Heathcock Marketing, moderated and opened with a foundational look at where CI has been and why it's been stuck there. Her framing set the tone: CI needs to evolve from report production to strategic advisory, and AI is both the catalyst and the threat.
Matt Bloom is an Associate Director at Proskauer Rose and, as became clear throughout the session, an expert on nearly every CI tool in the legal market. He brought deep technical knowledge on data sources, platform limitations, entity matching challenges, and how to build competitor analysis workflows with AI.
Nathalie Noel is at Jenner & Block and brought the perspective of how the CI professional's day-to-day work is changing. She demonstrated techniques her team has built, including custom search streams using keyword banks and Boolean logic generated by AI, that represent the kind of practical workflow innovation happening inside firms right now.
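The keyword-bank-plus-Boolean-logic technique can be sketched in a few lines. This is a minimal illustration, not Noel's actual configuration: the bank names, terms, and the OR-within-bank / AND-across-banks structure are all assumptions about how such a search stream might be assembled.

```python
# Illustrative sketch: turning keyword banks into a Boolean search string.
# Terms within a bank are OR'd together (synonyms for one concept);
# the banks themselves are AND'd (every concept must appear).
# Bank names and terms are hypothetical examples.

def build_boolean_query(keyword_banks):
    """Combine keyword banks into a single Boolean search string."""
    clauses = []
    for terms in keyword_banks.values():
        quoted = [f'"{t}"' if " " in t else t for t in terms]
        clauses.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(clauses)

banks = {
    "executive_moves": ["general counsel", "GC", "chief legal officer"],
    "trigger_verbs": ["appointed", "named", "joins"],
}
query = build_boolean_query(banks)
```

An AI tool would typically generate the banks themselves (synonyms, acronyms, role variants); the value of keeping the assembly step deterministic is that the resulting query is auditable and reusable across search platforms.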
From Report Producer to Intelligence Strategist
The panel opened with an honest assessment of where most CI functions sit today. The work is overwhelmingly tactical: receiving requests from BD teams or partners, pulling information from internal and external databases, repackaging it into digestible formats, and sending it back. The tools are databases and templates. The outputs are company reports. The success metric is volume.
This model has served firms well. Company reports, pitch prep, meeting briefings: these are important parts of the sales enablement process. But the panel argued that this model has also kept CI locked in a reactive posture. Not much time is spent on strategic work: understanding market positioning, identifying where the firm's share of a practice area can grow, supporting lateral recruiting with market intelligence, or helping leadership answer the bigger questions about where to invest.
The reasons are structural. Many CI professionals came from information science or research backgrounds, and the work naturally gravitates toward gathering and organizing data. Budget, time, and staffing constraints leave little room for anything beyond responding to the queue. And performance reviews tend to reward what's measurable: report count, turnaround time, request volume.
The provocation the panel put to the room: if your value is defined by report volume, what happens when AI can produce those reports in a fraction of the time? The answer isn't to compete with AI on speed. It's to move up the value chain, from delivering information to delivering insight, from answering requests to shaping strategy.
The slides captured this shift as a before-and-after. The "then" CI professional responds to requests on demand, compiles data from directories and databases, produces templated PDFs, works in a CI silo, delivers information, and is measured by output volume. The "now" CI professional anticipates information needs proactively, synthesizes multi-source intelligence, designs automated workflows and alerts, connects CI with BD, KM, and analytics, delivers decisions and recommendations, and is measured by business impact. The skill set has expanded too: AI fluency, data literacy, workflow design, and stakeholder communication are now part of the job.
Three Use Cases: What AI-Powered CI Looks Like in Practice
The core of the session was three detailed use cases showing how AI is changing CI workflows from the ground up.
Executive movement monitoring. This use case walked through a four-step approach to turning executive changes into business development opportunities. The signals are high-value: a new GC means a potential shift in outside counsel. A new board member with M&A experience may signal deal activity. A new chief risk officer or cybersecurity hire suggests the company is focusing on risk and compliance.
Step one: use AI tools to capture alerts on executive movements across your client and prospect universe.
Step two: cross-reference those alerts with your internal systems, your CRM, experience management system, and relationship data, to identify existing connections.
Step three: draft a CI-vetted summary for the relationship partner within 24 hours, because speed matters and the firm that reaches out first after a trigger event has a significant advantage.
Step four: include a recommended next step, not just the information. The summary should include a quick profile of the executive, a recent development that may be top of mind for them, the firm's existing connections (highlighting the strongest relationships), and a specific suggested action.
The panel noted that what used to take an hour or two of juggling multiple tabs and systems can now be done in roughly ten minutes with properly configured tools and workflows.
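The four steps above can be sketched as a single pipeline. Everything in this sketch is a hypothetical stand-in: the alert and CRM record shapes, the "strength" score, and the drafting logic would in practice be real alerting tools, a real CRM, and a CI professional's judgment.

```python
# Hypothetical sketch of the four-step executive-movement workflow:
# capture an alert, cross-reference the CRM, draft a summary, and
# attach a recommended next step. Data structures are illustrative.

from dataclasses import dataclass

@dataclass
class ExecAlert:
    company: str
    executive: str
    new_role: str

def find_connections(alert, crm_contacts):
    """Step 2: cross-reference an alert against CRM relationship data."""
    return [c for c in crm_contacts if c["company"] == alert.company]

def draft_brief(alert, connections):
    """Steps 3-4: a short summary plus a specific suggested action."""
    strongest = max(connections, key=lambda c: c["strength"], default=None)
    lines = [
        f"{alert.executive} named {alert.new_role} at {alert.company}.",
        f"Existing connections: {len(connections)}.",
    ]
    if strongest:
        lines.append(
            f"Recommended action: have {strongest['partner']} reach out within 24 hours."
        )
    else:
        lines.append("Recommended action: identify a warm introduction path.")
    return "\n".join(lines)

alert = ExecAlert("Acme Corp", "J. Rivera", "General Counsel")
crm = [{"company": "Acme Corp", "partner": "A. Chen", "strength": 0.9}]
brief = draft_brief(alert, find_connections(alert, crm))
```

The CI professional's validation pass sits between `draft_brief` and the partner's inbox; the automation buys back the hour of tab-juggling, not the judgment.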
Competitor trend analysis. This use case addressed one of the broadest and most common CI requests: tell us what our competitors are doing. The panel's approach was to narrow the scope before touching any tool. Define which firms you're tracking. Identify your trusted data sources: hiring trend aggregators, deal databases like PitchBook and Capital IQ, firm websites, and industry publications. Feed those specific, scoped sources into AI rather than asking it to search the open web.
The key shift is from "who" to "what" to "so what." AI handles the synthesis: by practice area, tell me which firms are growing, where they're hiring, what deals they're winning. The CI professional handles the interpretation: what does this mean for our positioning, and what should we do about it? The panel emphasized that this kind of project used to stall because the scope felt impossibly wide. AI provides the jumping-off point, and iteration with the requester narrows it to something actionable.
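The "who to what" synthesis step can be as simple as aggregating a scoped deal data set before anyone, human or AI, attempts the "so what." The firm names and records below are fabricated for the sketch; real inputs would come from the scoped sources named above.

```python
# Illustrative "who -> what" synthesis: count deals per firm and practice
# area, then compare two periods to see who is growing. Records are
# fabricated examples, not real deal data.

from collections import Counter

def deal_counts(deals):
    """Count deals per (firm, practice_area) pair."""
    return Counter((d["firm"], d["practice_area"]) for d in deals)

def growth(prev, curr):
    """Period-over-period change for every (firm, practice_area) key."""
    keys = set(prev) | set(curr)
    return {k: curr.get(k, 0) - prev.get(k, 0) for k in keys}

q1 = deal_counts([
    {"firm": "Firm A", "practice_area": "M&A"},
    {"firm": "Firm B", "practice_area": "Litigation"},
])
q2 = deal_counts([
    {"firm": "Firm A", "practice_area": "M&A"},
    {"firm": "Firm A", "practice_area": "M&A"},
    {"firm": "Firm B", "practice_area": "Litigation"},
])
delta = growth(q1, q2)
```

The output is the jumping-off point the panel described: a table of deltas the requester can react to, which narrows an impossibly wide question into a specific conversation.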
Litigation tracking and early warning signals. Drawing from the slides and discussion, this use case showed how CI teams can move from reactive case monitoring to proactive intelligence. The workflow: ingest data from sources like PACER, CourtListener (free), state court portals, Bloomberg Law, and Lex Machina. Use AI to summarize dockets at scale, flag pattern changes, and identify client stress signals or competitor counsel activity. Surface the results through a BI dashboard showing litigation trends by client, sector, and geography. When an alert fires, AI generates a brief narrative linked to a CRM opportunity, ready for a partner to act on.
The strategic value: detecting client financial stress before they call a competitor, identifying practice expansion opportunities by sector or geography, and moving CI from "here is the data" to "here is what this means for your practice."
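One of the "client stress signal" checks in this workflow can be sketched as a simple baseline comparison. The thresholds, field names, and records here are illustrative assumptions, not any specific platform's schema or the panel's actual rules.

```python
# Hypothetical early-warning check: flag clients whose new-filing count
# jumps well above their trailing baseline. The 2x ratio and the
# 3-filing floor are arbitrary example thresholds.

def stress_signals(baseline, current, ratio=2.0, min_filings=3):
    """Return clients whose current-period filings exceed ratio x baseline.

    baseline / current: dicts mapping client name -> filings per period.
    """
    flagged = []
    for client, count in current.items():
        base = baseline.get(client, 0)
        if count >= min_filings and count > ratio * max(base, 1):
            flagged.append(client)
    return flagged

baseline = {"Acme Corp": 1, "Globex": 4}
current = {"Acme Corp": 5, "Globex": 4, "Initech": 2}
flagged = stress_signals(baseline, current)
```

A flagged client would then trigger the AI-generated narrative and CRM opportunity described above, with the CI professional validating before any partner sees it.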
The Data Reality: What AI Doesn't Fix
The panel devoted significant time to the problems that AI amplifies rather than solves.
Entity matching is unreliable, even within a single platform.
One panelist described exporting a list of companies from PitchBook, then using PitchBook's own Excel plug-in to match those companies back to PitchBook IDs, and getting different results. The platform prioritizes stock tickers, which creates mismatches for private companies and subsidiaries. If a tool can't consistently match entities within its own universe, the challenge of matching across multiple internal and external systems is exponentially harder.
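A first-pass mitigation is normalizing names before any match attempt. The sketch below is deliberately crude: suffix-stripping alone is not entity resolution, which in practice needs stable identifiers (platform IDs, LEI, DUNS) and human review; the suffix list is an illustrative assumption.

```python
# A minimal name-normalization pass: lowercase, strip punctuation, and
# drop common legal suffixes so "Acme Holdings, Inc." and
# "ACME HOLDINGS LLC" collapse to the same key. Illustrative only.

import re

LEGAL_SUFFIXES = {"inc", "llc", "llp", "ltd", "corp", "co", "plc"}

def normalize_name(name):
    """Return a normalized match key for a company name."""
    tokens = re.sub(r"[^\w\s]", "", name.lower()).split()
    while tokens and tokens[-1] in LEGAL_SUFFIXES:
        tokens.pop()
    return " ".join(tokens)

a = normalize_name("Acme Holdings, Inc.")
b = normalize_name("ACME HOLDINGS LLC")
```

Note what this cannot fix: former names, acronyms, and subsidiary labels, the exact failure modes the panel described, still require a curated crosswalk table or human review.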
Internal data is inconsistent.
Three practice groups using three different industry taxonomies. Company names that don't match because one system uses a former name, an acronym, or a subsidiary label. Billing data that defines "top client" differently depending on whether you're measuring fees billed, hours worked, or realization. The panel reinforced what Session 1's speakers had raised: without a common data dictionary and ongoing data governance, AI outputs built on internal data will produce inconsistent and sometimes misleading results.
Ethical walls and client opt-outs create incomplete pictures.
Some clients are explicitly opting out of having their information fed into any AI system, even when firms assure them the data won't be used for training. This means CI teams may have to work with deliberately incomplete data sets, and AI tools can't be told "use this data but ignore that part." The panel strongly recommended building relationships with privacy and information security teams early, before an incident forces the conversation.
AI gives different answers to the same question.
During the tabletop exercise, attendees ran the same scenario through ChatGPT and Claude. ChatGPT recommended pursuing a client expansion opportunity. Claude recommended prioritizing a defensive client protection move. Both gave well-reasoned justifications. Both were plausible. Neither was definitively correct. The takeaway, which the panel made explicitly, is that AI tools can guide, suggest, and surface options, but they cannot make the final strategic decision. That's the CI professional's job. And it's the strongest argument for why the function can't be automated away.
Transformational Takeaways by Firm Size
The session closed with practical recommendations scaled to firm size.
For any firm, any budget: replace one templated report with a trigger-based alert plus an AI-generated brief. Build a prompt library for your most common request types. Add a confidence label to every AI-assisted deliverable so recipients know what's been validated and what hasn't.
For mid-size firms: connect free litigation monitoring tools like CourtListener alerts to a BI dashboard. Establish a CI validation checklist before any AI output reaches partners.
For enterprise firms: deploy entity normalization before scaling any AI intelligence program, because bad matching at scale is worse than bad matching on individual reports. Build a governance framework defining what AI can and cannot produce without human review. And position CI as a BD intelligence hub, not a report factory.
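The "confidence label" recommendation from the any-firm list is easy to prototype. A minimal sketch, assuming hypothetical label values and fields; nothing here is an LMA or firm standard.

```python
# Illustrative sketch of a confidence label attached to every AI-assisted
# deliverable, so recipients know what has and hasn't been validated.
# Label vocabulary and fields are assumptions for this example.

from dataclasses import dataclass
from typing import Optional

VALID_LABELS = ("validated", "partially-validated", "unvalidated-ai-draft")

@dataclass
class Deliverable:
    title: str
    body: str
    confidence: str = "unvalidated-ai-draft"
    validated_by: Optional[str] = None

    def footer(self) -> str:
        """Render the label line recipients see on the deliverable."""
        if self.confidence not in VALID_LABELS:
            raise ValueError(f"unknown confidence label: {self.confidence}")
        who = f" by {self.validated_by}" if self.validated_by else ""
        return f"[Confidence: {self.confidence}{who}]"

d = Deliverable("Acme Corp brief", "...", "validated", "CI team")
```

Defaulting new deliverables to the lowest label means an unreviewed AI draft can never silently present itself as vetted CI work.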
This post is part of a series from the LMA 2026 Annual Conference Pre-Conference Program, "AI, BI, CI: The ABCs of Data and Law Firm Intelligence," co-chaired by Michelle Woodyear (Mount Insights), Ashley Elliott (FBT Gibbons), and Rafeedah Keys (Perkins Coie). The e-book "The AI-Enabled Legal Marketer" and the guide "Free CI Research Tools" were shared with attendees of the pre-con session.
