At Nucleus, we understand that vulnerability management is a continuously evolving landscape. What was normal two years ago is in many ways different from a year ago, which is in turn different from how we do vulnerability management today - and it’s going to keep changing.
This evolution goes beyond the breadth of technologies we need to look after and the ever-changing vulnerability data sources we ingest; it impacts the very fabric of our vulnerability management programs. What we do with data once we have it, and ultimately how we prevent bad actors from harming our organizations and in turn our customers, are increasingly complex questions.
That’s the reason we’re extremely excited to have partnered with Google Chronicle on our first purpose-built SIEM connector - opening a world of new use cases for correlating vulnerability data alongside security logs and alerts.
Nucleus is obsessed with precision - we use terms like “scalpel-like accuracy” for a reason. In a scaling VM program, you need accuracy and precision in your data, decisions, and workflows. With this connector, Nucleus customers can now achieve that precision in their incident response workflows by synchronizing Nucleus-enriched vulnerability and asset data, with the associated risk attributes, to their Google Chronicle instance. Vulnerability data sent to Chronicle is asset-aware too, giving you all of your asset’s organizational context, such as asset groups and risk attributes, right alongside the vulnerability data.
This integration enables SOC and incident response teams to (among many other things!) investigate an incident without ever needing to leave their SIEM. They can see the asset’s business context, plus all vulnerabilities, risk, and threat intelligence correlated to an asset, from a single console. This data is now all at your fingertips alongside your network traffic, SIEM events, and log data. By correlating data from all these sources and putting it in the hands of your analysts, the incident response and decision-making processes just got better and more scalable.
A major component of a comprehensive and resilient vulnerability management program is being able to drill down into meaningful insights and drive the outcomes that matter to you and your organization. Uniquely within the vulnerability management space, Nucleus allows you to see all your mitigated vulnerabilities and access their history in the vulnerability record itself. This is great for auditing purposes, as well as for celebrating the wins with the teams that do the remediation work.
For this release, we’ve invested a lot of time in further enriching the insights that you can take from the Resolved Vulnerabilities page so that you understand how and when vulnerabilities were remediated with scalpel-like accuracy.
Previously, the discovered and remediated filters on the Resolved page were limited to 3-month increments. Now you can select any date range to get the view you need, when you need it.
We’ve also updated the Resolved vulnerabilities grid to include the number of instances that were mitigated and the number that are still active. This gives you a further level of reporting which, for example, could be used to inform your weekly, fortnightly or monthly vulnerability management reports.
You’ll also notice that the verification method column has been updated so you can easily see how the vulnerability was resolved, whether it was fully or only partially resolved, and report against that. This also makes it easy to search on the verification attribute of a vulnerability and report on all scan-mitigated vulnerabilities, or on vulnerabilities that were only partially mitigated via a scan.
The Tenable.io connector has had some under-the-hood upgrades to make it future-ready. This release includes a migration away from the legacy v1 API to the Tenable WAS v2 API, and a new option in the Import via Connector page to import just Web Application scans:
You can now also optionally ingest additional asset metadata into Nucleus, and push assets (and their metadata) from other sources back to Tenable.io for a more complete picture of asset information in Tenable itself. This lets you take asset data coming in from third parties and sync those unscanned assets back into Tenable.io, automatically getting the unscanned asset list into your scanning system. This should help with scan coverage issues in enterprises where it can be difficult to reconcile your scanned assets with what you own.
To enable this, head over to the connector setup page, edit your Tenable.io connector, and select the sync asset metadata options:
And now for something completely different… to how our existing connectors work. If you have your Tenable.io instance set up to interact with Azure, GCP, AWS, McAfee EPO, ServiceNow or BigFix, we’ll also pull in all of the associated metadata and store it as though it came from the respective source. This means that your existing asset processing rules that leverage additional metadata from other sources (for example, aws.account-id) will automatically apply to instances scanned by Tenable.io!
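To make that concrete, here’s a minimal Python sketch of why source-attributed metadata keys mean one rule covers both ingestion paths. Apart from aws.account-id (mentioned above), the keys and values are made-up examples:

```python
# A rule keyed on "aws.account-id" matches whether the value arrived from the
# AWS connector or via Tenable.io's cloud integrations, because both store it
# under the same source-prefixed key.
def rule_matches(asset_metadata: dict, key: str, expected: str) -> bool:
    return asset_metadata.get(key) == expected

tenable_sourced = {"aws.account-id": "111122223333", "aws.instance-id": "i-0abc123"}  # hypothetical values
aws_sourced = {"aws.account-id": "111122223333"}

assert rule_matches(tenable_sourced, "aws.account-id", "111122223333")
assert rule_matches(aws_sourced, "aws.account-id", "111122223333")
```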
Applying your asset context to filter all vulnerability pages in Nucleus is now even easier! Since the very early days of Nucleus, the Asset Filter button has enabled you to narrow down to only the vulnerabilities of interest by applying asset group and asset type filters that persist from page to page.
With this release we’ve updated this button to show a popup, so that you can create more complex queries around your filters, making filter selection better and faster (hint: you can search for groups instead of scrolling through a list).
Once you’ve applied your filter, it’s displayed at the top of each page, so you always know the context you’re looking at and can alter it if you need any last-minute changes:
With every release, Nucleus improves your ability to automate your vulnerability management program at scale. On top of the additional metadata now being ingested from Tenable.io, this release includes updates to both the AttackForge and Netsparker connectors. This new support for metadata will allow you to use these fields in the Nucleus Automation Framework and start scaling your VM processes. Below is a complete list of keys that will be ingested from these tools for use with the Nucleus Automation Engine:
We are always trying to make the API more usable, and with every release, we’re making enhancements to the Nucleus API so that you can get more done faster.
Ability to run an asset processing rule via the API: This has been a hot request, and we are happy to announce that you can now trigger an asset processing rule to run via the API, so that when you create an asset processing rule programmatically, it can be processed as part of the rule creation.
The GET /asset endpoint now returns all fields: Certain fields were missing from the GET asset endpoint response, so the API docs didn’t match the result. This has been fixed in the latest release.
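For illustration, here’s a rough Python sketch of what calling these two updates might look like. The base path, header name, and payload fields below are placeholders rather than documented API details; check the Nucleus API docs for the exact endpoints and parameters:

```python
import requests

BASE_URL = "https://example.nucleussec.com/nucleus/api"  # placeholder instance and path
HEADERS = {"x-apikey": "YOUR_API_KEY"}  # header name is an assumption; see the API docs

# GET /asset now returns every documented field in the response.
resp = requests.get(f"{BASE_URL}/projects/123/assets", headers=HEADERS)
resp.raise_for_status()
assets = resp.json()

# Hypothetical payload: create an asset processing rule and trigger it to run
# as part of creation, per the new "run via the API" capability.
rule = {
    "rule_name": "Group production AWS assets",
    "criteria": {"field": "aws.account-id", "value": "111122223333"},
    "actions": [{"type": "add_to_asset_group", "value": "Production"}],
    "run_on_creation": True,  # field name is illustrative
}
resp = requests.post(f"{BASE_URL}/projects/123/rules/assets", headers=HEADERS, json=rule)
resp.raise_for_status()
```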
NEW Connector for Google Chronicle
NEW Org Admin users can configure inactive or never-logged-in user accounts to auto-disable after a set number of days of inactivity
NEW Searchable tags on the “Resolved Vulnerabilities” page for better reporting of resolved vulnerabilities
NEW “Resolved Vulnerabilities” page now has columns for counts of both resolved and still active vulnerabilities for better reporting and tracking purposes
NEW The “Resolved Vulnerabilities” page can now filter vulnerabilities by discovered and remediated dates using custom date selectors
UPDATE The Tenable.io connector now ingests additional metadata
UPDATE The Tenable.io connector can now sync asset data back to Tenable.io
UPDATE The AttackForge connector now ingests additional metadata
UPDATE The Netsparker connector now ingests additional metadata
UPDATE Asset filtering now indicates what is being filtered on each page
UPDATE Speed improvements to asset merge and archive actions
UPDATE Enhancements to the PagerDuty connector and webhook
UPDATE Nucleus can now import RPM-type vulnerabilities from Sonatype
UPDATE Various performance improvements to increase the speed of the application
UPDATE PrismaCloud connector update to parse certain package names better
UPDATE Improvements related to viewing certain pages in Safari
BUGFIX Fixed an issue where, in certain cases, if you filtered by asset group and then made a status change to a top-level unique vulnerability, the status change affected all vulnerability instances of that unique vulnerability, not only the instances within the asset group you filtered on
BUGFIX Fixed an issue with timezones not being applied correctly for certain custom reports.
BUGFIX Fixed an issue where, in certain limited situations, scans marked as auto-imported were not shown with the connector auto-import indicator on the Connector Setup page.
BUGFIX Fortify references now display fully
BUGFIX Various other enhancements and minor bug fixes
Announcing our latest Bug Bounty connector: Synack. With the new Nucleus / Synack Connector, the gap between vulnerability management and crowdsourced security testing is much smaller. You can now easily (and automatically!) inject Synack-sourced security testing data into your vulnerability management process so that you can manage both sets of data in the same VM process.
Remember that connector builds are based on customer requests, so be sure to let us know if there are connectors that you’d like to see built, and hit that subscribe button to see when they’ll be coming to your Nucleus instance.
Editing Custom Findings After Creation
We’re committed to improving our ability to deliver on application security and penetration testing workflows for our customers, so in this release we’ve added something a lot of you have been asking for: the ability to edit ALL fields of a custom finding once it has been created.
Previously, if you wanted to update Port/Service information for a custom finding, you had to create a new instance of the custom finding, forcing you to copy and paste data from another instance. Now you can keep your finding record intact while changing the location of the custom finding instance itself.
We’ve continued to make it even easier to extract the specific information you want to use in other systems, with updates to filtering on both the Active Vulnerabilities and Asset Management pages. Of particular note, you can now use a custom date selector to filter the discovered and last seen dates for vulnerabilities on the Active Vulnerabilities page, giving you more targeted metrics for reporting purposes.
For example, do you want to know which vulnerabilities were discovered in your environment before the Mayan calendar ended?
Or how about vulnerabilities that haven’t been scanned for since you last moved out of your parents’ house? We’ve got you. Just use the new custom date range filter.
In this release, we’ve made asset merging a little easier for all. Now when two assets are merged in the UI, we’ll automatically update the new merged asset to include secondary matching information from the asset that was merged in. This means that after two assets are merged, future asset ingests won’t result in the old assets being created again!
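Conceptually, the merge behavior works something like this toy Python sketch (the data model here is illustrative, not our actual schema):

```python
# After a merge, the surviving asset also answers to the merged asset's
# identifiers, so the next ingest matches it instead of re-creating a duplicate.
primary = {"name": "web-01", "match_keys": {"web-01", "10.0.0.5"}}
secondary = {"name": "web-01.corp.local", "match_keys": {"web-01.corp.local", "10.0.0.5"}}

primary["match_keys"] |= secondary["match_keys"]  # merge carries over secondary matching info

incoming_name = "web-01.corp.local"  # the next scan reports the old identifier
assert incoming_name in primary["match_keys"]  # matches the merged asset, no duplicate created
```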
This release comes with a slew of improvements to existing connectors – there’s a little of something for everyone.
Some highlights include being able to import all containers, hosts, and deployed images in one go from Prisma Cloud, as well as setting additional metadata on assets from Bugcrowd. Scroll down to see a full list of connector changes.
Complete list of changes and bug fixes…
NEW There is a new Synack connector for ingesting bug bounty vulnerabilities.
UPDATE You can now use a custom date selector to filter the discovered and last seen dates for vulnerabilities on the Active Vulnerabilities page.
UPDATE The Bugcrowd connector now sets additional metadata on assets.
UPDATE Now when assets are merged, the merge is permanent by default: secondary matching information is automatically updated to include primary information (such as asset name or IP address) from the non-primary assets, unless this is disabled during the merge.
UPDATE The Prisma Cloud connector has been updated to ingest at a much faster rate with more input from users on what specifically to import.
UPDATE The Vulnerability Details Excel report has been updated to include the Asset Owner field on the Scan Data tab, as well as the vulnerability’s exploitability and user comments on each Severity tab.
UPDATE You can now identify vulnerabilities that already have comments from the Active Vulnerabilities page.
UPDATE Custom finding instances on device assets can now be edited to change the service or port after creation.
UPDATE Miscellaneous optimizations to improve the speed of automation rules and asset counting.
UPDATE Asset search filtering on the Asset Management page now allows for special characters.
UPDATE Qualys WAS Scan ingestion now includes setting the HTTP request body if provided.
UPDATE You can now specify regions for vulnerability ingest rules for the AWS connector.
UPDATE Improvements to the speed of asset synchronization and vulnerability ingestion for the AWS connector.
UPDATE Ingestion of vulnerabilities from Rapid7 InsightVM and Nexpose now also sets the vulnerability’s exploitability based on additional criteria from Rapid7.
UPDATE When ingesting OWASP Dependency Check scan files, an Informational finding for files with no vulnerable dependencies is no longer created.
UPDATE Extended support for additional columns in Alertlogic scan files.
UPDATE The Nucleus Custom Finding JSON file now supports setting exploitability as a boolean value in addition to a string.
BUG FIX Filtering for an unknown operating system in the Asset Management page now also includes operating systems that are set as Unknown.
BUG FIX The Assetnote connector now links to the correct support page.
BUG FIX In limited situations vulnerabilities ingested from Assetnote would not set the instance path.
BUG FIX Improvements to the way that dynamic fields are applied to asset groups in asset processing rules.
BUG FIX In limited situations the vulnerability description and recommendation for Sonatype NexusIQ vulnerabilities were not comprehensive.
BUG FIX In limited situations container images ingested from Prisma Cloud would have empty brackets appended to the container path.
BUG FIX The Sonatype NexusIQ connector no longer allows for importing of unsupported scan types.
Let’s paint a picture. It’s a bright sunny day in Florida and the Nucleus Ninja is out walking his Ninja-dog, Ninken, next to the local alligator pond. This gives the Nucleus Ninja a chance to clear his head and think about our customers, and that’s when it hits him: the dawning realization that may just be the answer to “how to do automation in the context of vulnerability management”.
This release is all about starting to automate vulnerability management at scale. We announced our push towards better automation in our first Quarterly Customer Webinar, and this release takes us further down that path. We’ve designed a brand new workflow, introduced a templating language, and updated a lot of areas to make Nucleus just that much better at automating workflows. And without further ado, here’s what we’ve been up to the past month…
Last release we announced Vulnerability Processing Rules, a new feature in the Nucleus Automation Engine that makes it easy and efficient to set due dates on vulnerabilities in line with the security policies in your organization. At the time, we promised more functionality coming soon to help you automate as much of the vulnerability analysis and tracking process as possible.
Today we’re excited to announce an extension to these rules using our new “ninja-approved” Action Card view. Now, in vulnerability processing rules you can not only trigger actions on more flexible data criteria, but you can also choose between a wider set of actions to perform on vulnerabilities when they are ingested into Nucleus. All actions can be filtered to only apply to subsets of assets for maximum flexibility and scalability in your automation ruleset.
Some of the new actions available include:
What’s even more exciting is that you can do all of the above actions in one rule. Simply create a new rule, choose the vulnerability and asset criteria, and add action cards to your heart’s content! We think that this can be particularly useful for actions that are specific to your organization, such as changing the vulnerability’s severity, assigning it to a user and adding an explainer comment all in one go, completely automatically as new vulnerability data comes into Nucleus.
Here’s what it looks like to add a bunch of actions:
We’ve also managed to sneak into this release another new action in Asset Processing Rules: Asset Owner. This adds to the existing set of actions available when setting asset processing rules as new asset data is ingested into the Nucleus Asset Inventory, which still includes setting asset groups and risk attributes. Stay tuned as we convert this workflow to the new action card view in the coming months to make it just that much easier to do the second hardest part of Vulnerability Management, which is knowing what assets you have.
This was the “Eureka” moment that allowed the Nucleus Ninja to come to the following simple conclusion:
“If I could just use asset fields dynamically in automation rules, I could have one rule to… rule them all!”
Well, it’s time to get back to your computer and fire up your browser, because Nucleus has got you covered! Introducing Dynamic Fields, a templating language for the Nucleus Automation Engine (and soon to be app-wide, or world-wide, depending on how you look at it).
Dynamic Fields allow you to construct asset and/or vulnerability processing rules that dynamically include information from the assets that the rules match during execution. For example, let’s say you want to automatically assign a vulnerability to a user based on the asset owner. That is now totally possible! Say goodbye to multiple rules for every possible value of a custom metadata field. You can create ONE rule that performs multiple actions with dynamic values based on attributes from elsewhere in Nucleus.
In this release, you can use asset fields dynamically in vulnerability processing rules when commenting on a vulnerability or assigning a vulnerability to a user, and in asset processing rules when adding an asset to an asset group or setting the asset owner.
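As a toy illustration (the {asset.*} placeholder syntax here is ours for the example, not necessarily the exact Nucleus syntax), here’s how one templated comment can serve every matched asset:

```python
import re

# Fill {asset.<field>} placeholders from the matched asset at rule execution time.
def render(template: str, asset: dict) -> str:
    return re.sub(r"\{asset\.(\w+)\}", lambda m: str(asset.get(m.group(1), "")), template)

asset = {"name": "web-01", "owner": "jsmith"}  # example asset fields

# One rule, many assets: the same action text works for every owner value.
print(render("Auto-assigned to {asset.owner} (asset {asset.name}).", asset))
# -> Auto-assigned to jsmith (asset web-01).
```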
Here is a complete list of the asset fields that you can use dynamically in these automation rules:
We didn’t stop there! Since you can use custom metadata from your extended asset record in Nucleus, we’re continuing to update existing connectors to include more additional metadata, so you can use that data for better automation and reporting. In this release we’ve made updates to the Sonatype NexusIQ and Prisma Cloud connectors.
As always, you can use all of the above metadata in the Nucleus Automation Engine to make more and more powerful automations in your Nucleus Projects.
At Nucleus, one of our guiding principles is to listen to our customers and build functionality that makes the vulnerability management lifecycle as quick and painless as possible so that more time can be spent on high-value tasks.
True to this principle, we’ve previously released two powerful connectors that integrate with Amazon Web Services (AWS), providing customers with vulnerability data identified by AWS Inspector, as well as synchronization with AWS EC2 instances so that you can keep on top of what your attack surface actually looks like. Since then we’ve been listening to your feedback on how you use these connectors so that we can make them even better than they already are.
Today we’re excited to announce a brand new connector, the Amazon Web Services connector, which integrates our two previous connectors into a single authentication flow and will serve as the foundation for scaling out support for more AWS services in the future (cough ECR is next cough cough). This connector is the latest step in our push towards making the ingestion of cloud asset and vulnerability data a quick and painless task.
With this release, the AWS connector becomes a single place for you to set up and manage access to all of your AWS accounts in your Nucleus project by leveraging cross-account IAM roles. IAM roles can be created directly in the AWS Console, or deployed using CloudFormation - we’ve provided a handy CloudFormation template for this.
Once roles are deployed to your AWS accounts, you can then add the role ARNs directly in the connector setup page and Nucleus handles the rest! For more information on setting up the new AWS connector, as well as the CloudFormation template, see our help documentation here.
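If you’re curious what the cross-account pattern looks like under the hood, here’s a minimal boto3 sketch of assuming such a role. The role name, account ID, and ExternalId are placeholders for your own values:

```python
import boto3

# Assume the role deployed in the target account (e.g. via the CloudFormation
# template) to receive temporary, scoped credentials.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/NucleusConnectorRole",  # placeholder ARN
    RoleSessionName="nucleus-ingest",
    ExternalId="your-external-id",  # commonly required for third-party cross-account access
)["Credentials"]

# Use the temporary credentials against that account, e.g. to list EC2 instances.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
    region_name="us-east-1",
)
print(len(ec2.describe_instances()["Reservations"]))
```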
You can now manage a single Asset Inventory Sync rule for all of your AWS accounts across all regions in your Nucleus project. Simply go to the Asset Inventory Sync tab of the Automation page and click Add Rule. Select your AWS connector, the regions, and accounts that you want to synchronize instances from, and hit Save & Finish.
The synchronization rule now also ingests all available metadata as Additional Metadata which can be viewed under the Asset Details page, as well as leveraged to construct powerful rules in the Nucleus Automation Engine:
The AWS Inspector integration has been turbocharged with new functionality and flexibility. You can now import vulnerability results by Scan, Target, or Template, as well as select the regions that you want to query and ingest data from:
Once you’ve chosen the import method and regions, you’re then presented with all available results across each account that you’ve set up in the connector, and can further filter by region and other information:
Please note that with this release we’ve deprecated the existing EC2 and Inspector connectors, as well as authentication via IAM access keys. They will continue to be supported for existing customers during the transition period, but no new features will be released for the previous AWS EC2 or Inspector connectors.
We’ve been working hard behind the scenes to make the Nucleus Automation Engine even better and provide more flexibility in the scenarios that trigger automation workflows! In this release we’ve updated the Vulnerability Processing, Ticketing & Issue Tracking, and Notifications rules so that they can be triggered based on more vulnerability conditions, such as:
For more information on the complete list of new triggers, check out our help center.
A new year, a new release of Nucleus! We hope everyone had a great break and a happy new year and are as excited as we are to see what 2021 brings. It can’t get worse than 2020, right?
The first release of this year is packed full of goodies - it has something for everyone. We’re also trying a new format for our release notes. See below to find out more!
The asset management and asset details pages have had a face-lift, bringing with them specialized views for some of our asset types and clearer visibility of container instances and images:
If you’re ingesting container images with tag data or source code repositories with branch information, Nucleus now intelligently matches container images from the same repository and branches from the same application so that you can easily swap between them:
In addition to Additional Metadata being front and center in our new asset details page, we’ve started ingesting and populating this section of our assets with the scan/tool metadata from each available source. We’ve adopted a standard dot-style naming convention so that you’re always aware of where the metadata came from:
Coupled with an update to our Asset Processing rules in the December release, which allows you to trigger a rule based on the value of Additional Metadata, you can build more and more powerful automations for your Nucleus projects.
This release includes additional metadata from Checkmarx, Veracode, Rapid7 InsightVM and Microsoft Defender for Endpoint. We’ll also slowly be updating our other connectors over the coming months to include more metadata from them too.
This release, we’re introducing support for Microsoft Defender for Endpoint (previously known as Microsoft Defender ATP). This connector has been one of the most requested integrations to date, so, true to our word of building for our customers, we’ve built a connector that integrates with the Threat and Vulnerability Management module to ingest identified CVEs into Nucleus in an automated way. To find out more, check out the help article.
Shifting left means getting feedback as early as possible in the development lifecycle, and for many that means scanning code as soon as it’s branched. Now that Nucleus makes it easier to view the different branches of applications, we’ve updated two of our most used connectors to also be branch aware.
By using custom fields that are set up in these tools, you can now import the application name, branch, git repository URL, and commit hash of a scan directly into Nucleus. If you’re using Checkmarx, you can also optionally set a delimiter in the connector setup, which means you can pull the branch name directly out of the project name.
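As a rough sketch of how a delimiter like this works (the “::” delimiter and project name are made-up examples):

```python
# Split a Checkmarx project name like "storefront::feature/login" into the
# application name and branch using the configured delimiter.
def split_project_name(project_name: str, delimiter: str = "::"):
    app, _, branch = project_name.partition(delimiter)
    return app, (branch or None)

print(split_project_name("storefront::feature/login"))  # ('storefront', 'feature/login')
print(split_project_name("storefront"))                 # ('storefront', None)
```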
We’ve also updated these connectors to give you even more flexibility with how you create asset groups. Now when assets are imported from these scanning tools, you have the option to create unique asset groups, create groups that match with imports from other apps, or to do nothing at all!
Vulnerability ingestion for Veracode has also improved: we now match each vulnerability’s status to its corresponding counterpart in Nucleus.
Similar to Checkmarx and Veracode, we’ve also updated our Rapid7 InsightVM connector to give you more metadata and more choices when importing assets. Not only do you have the same asset group import options as Checkmarx and Veracode, but we are also ingesting all criticality, owner, location and custom tags as additional metadata that you can use when creating automation rules.
Sometimes even though we have the best of intentions, things just don’t go the way we planned, and we’re left to pick up the pieces and figure out what went wrong. In this release, we’ve made it easier to be notified and investigate when a connector ingestion job didn’t complete successfully.
In the newly renamed Data Ingest section (previously known as Scans), you can now view all connector activity for a specific project including a log of previous jobs and any upcoming jobs.
We’ve also made it possible to be notified when a scan ingestion fails. Navigating to Project Administration > Edit Project Info will allow you to set an email address to receive daily or weekly digest emails when a scan fails. These emails will only be sent if a scan ingest fails!
This release comes with some optimizations that make Nucleus even faster than it already is. Page loads should be up to 2x faster across the application.