Scrutiny - full specification
This page is for you if you're evaluating Scrutiny or comparing it with a competitor. Please bear in mind that Scrutiny's licence is a one-off purchase, not an annual subscription.
- Interface:
- Organise your sites into folders.
- An autosave feature saves data for every scan, giving you easy access to results for any site you've previously scanned.
- Crawling engine:
- fast, efficient and native to macOS (i.e. not Java, and not an iOS app running under Catalyst), which makes for speed and security. If your server can cope, turn up the number of threads and see how fast it goes.
- handles large sites without slowing down
- can store a huge amount of information: tens of thousands of pages containing hundreds of thousands of links, drilling down as many levels as you like
- many options for limiting the crawl: by number of levels or links, or by blacklisting / whitelisting URLs (see the sketch after this list)
- many options for tailoring the crawl: ignoring query strings or not, ignoring trailing slashes or not, tolerance of coding errors such as mismatched quotes, etc.
- many options for doing other things while crawling, e.g. archiving and spell-checking
- search your site for pages containing, or not containing, specific text; a single term or multiple terms; the full code or just the visible text. Regular expressions are now supported.
- better protection when disc space is low: the scan stops before catastrophe can happen, warning you and offering the option to pause or continue
- scan websites which require authentication (signing in / logging in)
- can optionally render each page, making it possible to scan sites that require client-side JavaScript rendering
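
As a rough sketch of how this kind of blacklist / whitelist rule works in general (this is not Scrutiny's implementation; the patterns and the `should_crawl` helper are invented for illustration):

```python
from fnmatch import fnmatch

# Hypothetical rules: crawl only URLs under /blog/, but skip anything
# under /blog/archive/ and any URL with a query string.
# In fnmatch patterns, [?] matches a literal question mark.
WHITELIST = ["https://example.com/blog/*"]
BLACKLIST = ["https://example.com/blog/archive/*", "*[?]*"]

def should_crawl(url: str) -> bool:
    """Return True if the URL passes the whitelist and isn't blacklisted."""
    if not any(fnmatch(url, pattern) for pattern in WHITELIST):
        return False
    return not any(fnmatch(url, pattern) for pattern in BLACKLIST)

print(should_crawl("https://example.com/blog/post-1"))        # True
print(should_crawl("https://example.com/blog/archive/2019"))  # False
print(should_crawl("https://example.com/shop/item?id=42"))    # False
```
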
- Link check:
- option to render pages before scanning them
- option to scan PDF or Word (.docx) documents to find links
- option to scan a site locally. This isn't as good as scanning via a server, because you're testing whether files exist rather than sending an HTTP request and receiving a server response code.
- Start the scan from a list of links, an XML sitemap, a local PDF document or a Word (.docx) document
- option to check for broken images
- status for each link clearly displayed (e.g. '200 no error')
- support for attempting to spot 'soft 404s', where the server returns 200 even though the intended page hasn't been found (see the sketch after this list)
- limit your crawl using blacklisting or whitelisting on the URL, and even on terms within the content
- colour highlighting
- filter results to show bad links only, internal links, external links, images and more
- use a context menu for options such as visit URL, copy URL, highlight link on page, re-check link, mark as fixed
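
For illustration, here is a minimal sketch of the general idea behind soft-404 detection (not Scrutiny's actual heuristic; the phrase list is an assumption, and a real checker would be far less naive):

```python
import requests

# Assumed tell-tale phrases; a real heuristic would be more sophisticated.
NOT_FOUND_PHRASES = ("page not found", "404", "doesn't exist", "no longer available")

def looks_like_soft_404(url: str) -> bool:
    """A page is a suspected 'soft 404' if the server says 200 OK
    but the body reads like an error page."""
    response = requests.get(url, timeout=10)
    if response.status_code != 200:
        return False  # a real error code, not a soft 404
    body = response.text.lower()
    return any(phrase in body for phrase in NOT_FOUND_PHRASES)
```
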
- Check for insecure content and links to the old http:// site (when migrating to an https:// site):
- See a list of pages which contain links to the http:// version of the site
- See a list of pages which reference insecure / mixed content such as images or other files (see the sketch below)
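
A minimal sketch of what such a mixed-content check involves, assuming the page markup has already been fetched (illustrative only, not Scrutiny's code):

```python
import re

def find_insecure_references(html: str) -> list[str]:
    """Return http:// URLs referenced by src/href attributes; on an
    https:// page, each one is an insecure reference or mixed content."""
    return re.findall(r'(?:src|href)=["\'](http://[^"\']+)', html)

page = '<img src="http://example.com/logo.png"><a href="https://example.com/">ok</a>'
print(find_insecure_references(page))  # ['http://example.com/logo.png']
```
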
- SEO audit:
- display SEO parameters such as URL, title, description, main headings, noindex/nofollow
- keyword density alerts: see pages with any keyword(s) occurring in the content above a particular threshold ('stuffing'). Double-click to see an analysis for that page, checking terms of up to four words (see the sketch after this list).
- keyword / phrase analysis: see the count for any word / phrase in the URL / title / description / content
- list pages with missing SEO parameters (title, description etc)
- list pages with possible duplicates (same content, different url)
- list pages with description too long / too short
- list pages with title too long
- list pages with too many links
- list pages with thin content
- list pages with mixed content (http:// resources within an https:// page)
- list deep content (more than X links from the home page)
- Find images with no alt text
- list pages with redirect chain
- displays stats for each page such as word count, link count, content size, image count, image weight
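
To make the keyword-density idea above concrete, here is a simplified sketch of one way such a figure can be computed (Scrutiny's own calculation may well differ):

```python
import re

def keyword_density(text: str, phrase: str) -> float:
    """Words belonging to occurrences of the phrase, as a fraction
    of the total word count."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    phrase_words = phrase.lower().split()
    n = len(phrase_words)
    hits = sum(1 for i in range(len(words) - n + 1) if words[i:i + n] == phrase_words)
    return (hits * n) / len(words) if words else 0.0

body = "cheap widgets here, buy cheap widgets, the best cheap widgets"
density = keyword_density(body, "cheap widgets")
print(f"{density:.0%}")  # 60%: well above any plausible 'stuffing' threshold
```
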
- Spelling:
- checks for spelling issues on your pages as it scans
- Step through those one by one and see suggestions
- choose the language used by the spell checker on a per-site basis
- Reporting:
- Summary report visible at the end of a scan, containing stats about bad links, SEO problems and spelling / grammar issues
- Customisable, branded summary report can be generated after a scheduled or ad-hoc scan
- Full report contains the summary report plus CSVs for the main tables
- Pie chart (for links) and radar chart (for SEO) included in the summary report.
- Build your own custom reports using exported CSV files and external tools (see the sketch after this list)
- Scrutiny's data is optionally autosaved, or manually saved and reloaded, so you can carry on working on broken links or other issues another day without re-scanning
- These reports can be web-enabled. With the option switched on in Preferences, simply transfer the files to a web server and anyone with a link can view / download them. This allows you to provide reports to anyone within an organisation, or (with customisation of the logo/header) provide services to your customers.
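
For example, a custom report could be built from an exported links CSV like this (the filename and column names here are hypothetical; check the headers of your actual export):

```python
import csv

# Hypothetical export: links.csv with 'url' and 'status' columns,
# where status holds values like '200 no error' or '404 not found'.
with open("links.csv", newline="") as f:
    rows = list(csv.DictReader(f))

bad = [r for r in rows if not r["status"].startswith("2")]
print(f"{len(bad)} of {len(rows)} links returned a non-2xx status:")
for r in bad:
    print(f'  {r["status"]}  {r["url"]}')
```
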
- HTML validation of the entire site:
- A number of tests are made on the HTML of each page as it is scanned, with a focus on well-formedness and issues that may have consequences (see the sketch after this list).
- For a strict validation, a page can be passed to the W3C validator with a single click.
- No software can test your website and declare it fully ADA-compliant (or more specifically, WCAG-compliant), because some of the checks need to be made by a human or are subjective.
- However, there are certain very important things that automated testing does very well, and Scrutiny performs a number of these checkpoints.
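
As a toy example of the kind of well-formedness test described above, here is a minimal tag-balance check built on Python's standard HTML parser (Scrutiny's checks are more extensive than this):

```python
from html.parser import HTMLParser

# Elements that never take a closing tag.
VOID_ELEMENTS = {"br", "img", "hr", "meta", "link", "input"}

class TagBalanceChecker(HTMLParser):
    """Flags close tags that don't match the most recently opened tag."""
    def __init__(self):
        super().__init__()
        self.stack, self.problems = [], []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_ELEMENTS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in VOID_ELEMENTS:
            return
        if not self.stack or self.stack.pop() != tag:
            self.problems.append(f"unexpected </{tag}>")

checker = TagBalanceChecker()
checker.feed("<div><p>hello</div>")  # <p> is never closed
print(checker.problems)              # ['unexpected </div>']
```
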
- Scans can be scheduled, with a number of optional actions taken when the scan completes (open a file/script, send email, save report, save sitemap).
- Scrutiny doesn't have to be running for a scheduled scan to take place.
Common tasks
The links below will give you more information and a brief tutorial for each task:
- find and display my site's broken links
- locate a broken link
- limit my crawl using blacklisting / whitelisting
- export an XML sitemap
- use canonical href to exclude duplicates from my XML sitemap
- find missing meta tags
- find duplicate content (same content, different url)
- analyse my pages for occurrences of a chosen key word / phrase
- test the HTML validation of a page or all pages
- test a website which requires authentication
- run Scrutiny on a schedule
Doesn't do exactly what you'd like? Please let me know.