Scrutiny

Version History

v10.3.5 April 2021

  • Some improvements relating to images:
    • Image urls within <picture><source srcset=...> were being collected and checked even when 'check images' was switched off.
    • As a side-effect of the above fix, html5 audio and video source urls are no longer collected when 'check images' is off, which is probably the expected behaviour.
    • For the same reason, <embed> tags aren't searched if 'check images' is off
    • Images with querystring after the file extension were not being recognised as images under certain circumstances (eg if they had a bad status, or if no mime type is included in the response header)
  • Some improvements to checking list of links / local files:
    • Enables checking of files stored in certain locations outside the user directory.
    • Fixes problem with case sensitivity check when a file location involves a symlink
  • Handles certain trackback links, no longer reports them as bad links.

v10.3.3 March 2021

  • Now offers options under the 'Log in' button. Version 10.3.1 updated the webview used in the Log in window, which helped the functionality work with some sites but broke it for others. Version 10.3.3 offers a choice: try the legacy version first; if that doesn't work, try the other.

v10.3.2 March 2021

  • The Warnings table and the spelling table weren't clearing properly when the user switched to a different website and viewed autosaved data
  • Fixes problem where, under unlikely circumstances, spurious character(s) could find their way into an image or link url, causing a bad link to be reported.
  • Important fix - link urls within html area map were incorrectly being marked as images, which could prevent the page from appearing in the sitemap if the image map is the first occurrence of that url that Scrutiny discovered.

v10.3.1 March 2021

  • Fix and enhancement to existing option 'trust invalid server certificate' (internal domain / subdomains only). Allows scanning of site while certificate is out of date or not yet installed properly. Links to external sites with invalid cert will still be correctly reported as bad links
  • Adds option to override 'down but not up' rule, which usually limits Scrutiny to the 'directory' that you start in. This allows you to have a deep url as your starting url (eg for authentication reasons) when you actually want to scan the whole domain.

v10.3.0 February 2021

  • Improves handling of linked files without a file extension (eg with "?format=json" in the url) where there is a 'type' attribute in the html meta tag. Where that type attribute and the server's 'content-type' header field don't agree (which may well happen because of the missing file extension), the html type attribute takes priority.
  • "Type mismatch: Type attribute in html is xxx/yyy, content-type given in server response is aaa/bbb" added to list of possible warnings.
  • Now does not try to parse json files found in metadata as if they're html (but does still check them as before)
  • Where there are blacklist / whitelist rules, these are now printed at the top of the exported summary report, so that the scope of the scan is clear.
  • Other small enhancements
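The precedence rule described above can be sketched as follows; the function name and return shape are illustrative only, not Scrutiny's actual code:

```python
def effective_content_type(html_type_attr, server_content_type):
    """Return the content type to use, plus a warning if the two disagree.

    The html 'type' attribute, when present, wins over the server's
    'content-type' header (hypothetical sketch of the rule above).
    """
    warning = None
    if html_type_attr:
        if server_content_type and html_type_attr != server_content_type:
            warning = ("Type mismatch: Type attribute in html is "
                       f"{html_type_attr}, content-type given in server "
                       f"response is {server_content_type}")
        return html_type_attr, warning
    return server_content_type, warning
```

For example, a link tagged `type="application/json"` whose server responds `text/html` would be treated as json, with the mismatch warning recorded.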

v10.2.1 February 2021

  • Fixes problem with the 'flag blacklisted' option. Blacklisted urls (ie 'do not check links containing...') were not being flagged properly. The option is now renamed "Treat blacklisted urls as bad links", and with it switched on, those urls now show up when filtering 'bad links only'.

v10.2.0 February 2021

  • Adds warning: 'link url has mismatched or missing end quotes'
  • Fixes links flat view not reloading when the search box is cleared
  • Fixes all-links.csv being blank when generated automatically at the end of the scan with Preferences set to flat rather than 'by link'.
  • Fixes garbage filename when the full report is generated automatically on finish (the intended unique filename still leaves room for improvement)

v10.0.5 January 2021

  • A number of minor improvements and fixes to the html validation / reporting.
  • Fixes 'export warnings as html' which was exporting the wrong table.

v10.0.4 January 2021

  • Fixes a problem with automated generation of the new Warnings table as part of the full report

v10.0.3 (now the general release) January 2021

  • Adds 'Badly nested <form> and <div>', 'Form element can't be nested' and 'unclosed form element' to list of possible html validation warnings.
  • Version 10 is now considered stable and replaces 9.14.3 as the full main release.

v10.0.2 (beta) January 2021

  • Fixes warning "page has image without alt text, image url is..." giving the link url rather than the image src if the image in question is wrapped by a link
  • Adds context menu to Warnings table containing Copy URL (the page or link url that the warning relates to), Visit (ditto) and Show Inspector (page or link inspector as appropriate).

v10.0.1 (beta) January 2021

  • Fixes possible crash with certain options switched on

v10.0.0 (beta) January 2021

  • html validation of all pages during scan. A list of warnings (mostly html validation but also accessibility and other warnings) is available from the Results selection screen. It's sortable, exportable and filterable (html validation / links / server errors).

Other new features:

  • Adds option to generate an image sitemap.
  • (It's possible to include images in your regular sitemap.xml, and this may be preferable. But if you want to generate a separate image xml then you can now exclude them from the main sitemap and generate a separate imagesitemap.xml. Because of extra processing while scanning, this is an option which is off by default and must be enabled before the scan in Preferences > Sitemap. Like the main sitemap.xml, it is automatically broken into multiple files if it reaches a maximum size or number of urls. It always includes each image url only once, which is an option with the main sitemap.)
  • Adds a summary of warnings, with the 3 most common, to the summary report. All warnings are saved as csv along with the full report if the relevant checkbox is checked in the site's settings.
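A separate image sitemap of the kind described above conventionally uses the sitemap image extension namespace alongside the standard sitemap schema; the urls below are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://example.com/page.html</loc>
    <image:image>
      <image:loc>https://example.com/images/photo.jpg</image:loc>
    </image:image>
  </url>
</urlset>
```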

Other fixes:

  • If "only include contents of paragraph and heading tags" is switched on (applies to spellcheck, searching, word count and soft 404 search), p tags weren't being correctly found if they were plain <p> (ie with the closing angle bracket immediately after the p). Now fixed.
  • Warnings table opens Page inspector if it's about a page, or the Link inspector if it concerns a link.
  • 'flag blacklisted' option fixed, and now creates warning if on.
  • Improves spell-checking, filtering is improved to take out filenames and some other non-text that can legitimately appear in the page content.
  • Makes the 'highlight' feature within the link inspector a little more robust.
  • When needing to identify a file type, the mime type (returned in the 'content-type' header field), if it exists, takes precedence over the file extension. There are rare scenarios where this makes a difference.

Full list of possible html validation warnings (so far):

unclosed div, p
extra closing div, p
extra closing a
p within h1/h2...h6
h1/h2...h6 within p
more than one doctype / body
no doctype / html / body
no closing body / html
unterminated / nested link tag
script tag left unclosed
comment left unclosed
end p with open span
block level element XXX cannot be within inline element XXX (currently limited to div/footer/header/nav/p within a/script/span but will be expanded to recognise more elements)
'=' within unquoted src or href url
image without alt text. (This is an accessibility, html validation and SEO issue. The full list of images without alt text can also be found in Scrutiny's SEO results.)
more than one canonical

Warnings that are not html validation:

The server has returned 429 and asked us to retry after a delay of x seconds (a number of these indicates that you need to rate-limit your scan)
(if 'check anchors' is switched on) a link contains an anchor which hasn't been found on the target page
The page's canonical url is disallowed by robots.txt
link url is disallowed by robots.txt
The link url is a relative link with too many '../' which technically takes the url above the root domain.
(if the 'flag blacklisted' option is switched on; default is off) The link url is blacklisted by a blacklist / whitelist rule. With this option on, the link is coloured red in the link views, even if warnings are totally disabled.
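The 'too many ../' condition above can be checked by counting path segments; url-joining libraries and browsers silently clamp the extra '../' at the root, which is why it is only a warning. A hedged sketch, not Scrutiny's implementation:

```python
from urllib.parse import urlparse

def climbs_above_root(base_url, relative_link):
    """True if the relative link's leading '../' segments outnumber
    the directory depth of the base page (illustrative sketch)."""
    # directory depth of the base page (drop the filename segment)
    segments = [s for s in urlparse(base_url).path.split('/') if s]
    depth = len(segments[:-1])
    # count leading '../' segments in the relative link
    ups = 0
    for part in relative_link.split('/'):
        if part == '..':
            ups += 1
        else:
            break
    return ups > depth
```

So from a page two directories deep, three leading '../' segments would technically climb above the root domain and trigger the warning.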

v9.14.3 December 2020

  • Finds image url in <meta property="url" content="xxxx">, when either lazyload or 'look in meta tags' is switched on
  • As a policy, now reports but doesn't test certain special urls such as xmlrpc.php and about:blank. They won't appear in Warnings, but will be listed as "not checked" so that the webmaster can see that they exist on the page. Checking these urls isn't helpful (some will always return a good status, some will always return a bad one) but being aware of their presence may be helpful. They may exist for perfectly legitimate reasons such as part of a lazyload system or pingback system.

v9.14.2 December 2020

  • Improvements to warnings functionality:
    • Improves wording and info within warnings table.
    • Warns about certain specific elements that shouldn't be within link (iframe, form, embed, source etc)
    • Fixes a duplication issue
  • Improvements to spell checking functionality. More aggressive filtering of filenames and urls which may legitimately appear in the page text if they're in link text.

v9.14.1 December 2020

  • Updates the selectable user-agent strings and adds more (in particular, Edge and some more mobile browsers)
  • Fixes problem where spell-checker incorrectly attempts to spell-check an m4r file (if the server doesn't return a mime type in its header fields) and puts garbage in the spell check results
  • Small fix for new warnings functionality. The problem was causing some warnings not to be listed

v9.14.0 December 2020

  • Improves warnings reporting:
    • Adds a 'Warnings' table. Rather than having to find warnings via the links results, by opening the link's inspector for each orange link, Warnings now has its own entry in the Results selection. This shows a sortable and exportable table listing urls/warnings.
    • Tidies up the Results selection screen.
  • After progressing past the 'Scanning' item in the breadcrumb trail, 'Scanning' no longer disappears but changes to read 'Scan again' and acts as a shortcut to re-scan the current site.
  • Improvements in the saving/reloading of data, should be a tiny bit faster
  • Fixes a 'mixed content' false positive where 'check images' is off and a link wraps an image with alt text and the link goes to an external http:// url.

v9.13.3 November 2020

  • Now recognises and warns about unterminated or nested link tags, which are illegal in html. Previously if this problem existed on a page, it could cause some spurious minor symptoms such as a link url being incorrectly reported as an image, or incorrect warning about mixed content on the page.
  • Updates the Paddle licensing framework to the latest version which is Big Sur and M1 compatible

v9.13.2 October 2020

  • Fix: When sitemap was exported to csv, double-quotes in data (eg heading) weren't being escaped properly and would break the csv
  • Adds a preference for how double-quotes in data are handled when anything is exported to csv. There are at least a couple of ways to do this and neither works universally. Scrutiny's default (replacing the double-quotes with singles) isn't ideal, but it should work whatever is used to open the csv. As with Integrity, a choice of three methods is offered.
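Two of the approaches can be sketched like this: conventional csv escaping doubles embedded quotes and wraps the field, while the fallback described above swaps double-quotes for singles so even naive csv readers cope. Function names are illustrative, not Scrutiny's code:

```python
def escape_field_rfc4180(field):
    # standard csv escaping: double each embedded quote, wrap in quotes
    return '"' + field.replace('"', '""') + '"'

def escape_field_fallback(field):
    # the described default: swap double-quotes for single quotes
    return field.replace('"', "'")
```

The first form is correct per the csv convention but breaks some consumers; the second loses the original character but never breaks the file.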

v9.13.1 October 2020

A number of small fixes and enhancements including:
  • Links results, filter button, the option "http: links" is now correctly shown or hidden, and works if it is visible.
  • Results selection - sitemap - the availability message was showing an incorrect string and was confusing.
  • When using search box or filter button above the link results, operation is a little quicker

v9.13.0 October 2020

  • Adds 'warnings' column to 'by link' and 'flat' links views, which can be included when either of those views are exported as csv.
  • Fixes problem where sitemap > show excluded > context menu > Copy / Visit sometimes used the wrong url.
  • Adds 'broken anchor links' to filter button above links tables.
  • Some fixes to anchor functionality

v9.12.2 October 2020

  • When starting url returned a page with no links, crawl was stalling. Scrutiny was returning a status for the starting url but not checking images or linked files. The page wouldn't be listed in the SEO table and it wasn't possible to generate a sitemap.
  • There may be a good reason for the starting page to have no links (eg an under-construction page or an online business card).
  • The 'unexpectedly few results' warning and offer of diagnostic window / support still appears (which is appropriate if the crawl only finds a single page and few or no links) but if this is batted away, the images and linked files will now have been checked, and it will be possible to generate an xml sitemap or view / export the SEO information for that page.
  • Now searches for window.location urls in the page's head and includes those urls in the crawl if they exist. (This can catch the screen.width redirection to a mobile site. Blacklist the mobile url if you don't want this to happen, or if you want to scan the sites separately.)
  • In recent versions, with larger scans, Scrutiny may have been very slow to respond or even appeared to hang, with certain actions such as reloading data or switching to 'bad links only'. This version should be much better.

v9.12.1 September 2020

  • Fixes possible problem if at first launch, user cancels the offered site creation sheet and then tries to open a list of links
  • Fixes problem with committing a schedule for the first time, if user's LaunchAgents folder doesn't already exist
  • Package (.dmg) now includes an uninstaller

v9.12.0 September 2020

Improvements concerning robots.txt and warnings:
  • small fix with parsing the robots.txt file
  • always parses robots.txt if present and for each url, notes whether the url is allowed or disallowed. If disallowed, a note is made in the url's warnings (warnings are highlighted in orange in the links tables, the actual warnings can be seen in the link inspector)
  • whether a link is still checked and followed or not is down to the 'limit crawl based on robots.txt' setting. Whether disallowed pages are included in the sitemap is decided by Preferences>Sitemap>Observe robots.txt
  • adds 'Disallowed by robots.txt' choice in filter button in SEO
  • removes some unnecessary operations so that switching between tabs in the link results is quicker
  • fixes problem with 'warnings' filter option in links 'by status' view
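The allow/disallow decision described above can be illustrated with Python's standard robotparser (Scrutiny has its own parser; the rules here are made up for the sketch):

```python
from urllib.robotparser import RobotFileParser

# parse a hypothetical robots.txt
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# each discovered url is checked against the rules; a disallowed url
# gets a note in its warnings, and whether it is still followed (or
# included in the sitemap) depends on the settings described above
allowed = rp.can_fetch("*", "https://example.com/public/page.html")
blocked = not rp.can_fetch("*", "https://example.com/private/secret.html")
```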

v9.11.0 September 2020

  • Some fixes to the Monitoring functionality including
  • Fixes possible crash when using the Test button
  • Fixes bug preventing log file directory from being selected

v9.10.0 September 2020

  • Correctly ignores sms: links
  • Fixes possible crash if empty url() encountered in inline style or style sheet
  • For users of 10.15 upwards, implements a new 'ignore local and remote cache' policy which may fix caching issues on Catalina upwards
  • Advanced feature - able to send form data with the initial request, without the authentication checkbox being switched on or username/password filled in
  • Advanced feature - the authentication login window now correctly uses the same user-agent string set in Preferences, and now offers an option for js on / off in the login window (which doesn't have to match the 'run javascript' setting for the scan).

v9.9.1 August 2020

  • Improvements to image discovery; now finds and processes image urls within inline styles (if 'check images' is switched on)
  • Improvements to the engine. Removes possible duplication of link occurrences under certain circumstances, plus a few other small changes. The results from this new version may be slightly different but should be more accurate.
  • Fixes image urls found within style sheet being reported even if 'check images' was turned off.

v9.9.0 August 2020

  • Adds support for checking for links within Word (.docx) documents
  • A new checkbox in the settings: 'check links within docx files' is now alongside the existing 'check links within pdf files'
  • Adds support for checking all the links within a *local* pdf or docx document:
    • If File > Open is used and a pdf or docx chosen, the file:// url will be passed to the 'new settings' dialog (alternatively, use File > New and drag the file into the 'starting url' field, clearing the field and putting it in focus first, or type the file:// url)
    • If a new config is initiated using the above methods, and the starting url is a local pdf or docx, some necessary settings will be set ('this page only', and 'check links within pdf/docx')
    • (Note, Scrutiny already handled a local list of links in txt or csv format, using the above method to add it.)
  • When testing linked files, now automatically ignores the WordPress REST API files, which return an unauthorised status when tested, leading to unnecessary concern.

v9.8.4 July 2020

  • Adds support for charset=GBK, charset=koi8-r, charset=euc-kr and some other Latin and non-Latin character encodings.
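These charsets correspond to standard codec names; a quick round-trip sketch showing that the declared charset determines how a page's bytes decode:

```python
# round-trip sample text through each of the encodings mentioned above
results = []
for text, charset in [("привет", "koi8-r"), ("国", "gbk"), ("한글", "euc-kr")]:
    raw = text.encode(charset)        # bytes as a server might send them
    results.append(raw.decode(charset) == text)
```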

v9.8.3 July 2020

  • Some improvements around starting your scan with a list of links. In particular, automatically differentiating between txt and csv file types (this fixes a bug where a url containing a comma within a txt file would be incorrectly split).

v9.8.2 July 2020

  • Fixes a couple of situations that could result in incorrectly-constructed link urls and therefore false positives
  • Better handling of escaped forward slashes in urls

v9.8.1 June 2020

  • Very small fix to prevent some false positives arising from SVG masks in style sheets

v9.8.0 June 2020

A number of fixes and enhancements related to the padlock in browser address bars. Different browsers have different criteria for displaying the padlock for a secure site.

The insecure content report will now include:

  • insecure urls found in certain meta tags, such as open graph or Twitter cards.
  • insecure images, whether hosted externally or not
  • insecure form action urls, even if the 'check form action' is switched off.

Here's the full list of changes:

  • Adds option to search certain meta tags for urls. Those urls will be link-checked and also checked to see whether they count as insecure / mixed content. The meta tags in question are meta name=, meta itemprop= and meta property=. This includes social media tags such as meta property=og:image
  • Now correctly checks externally-hosted images to see whether they count as insecure / mixed content.
  • Fixes some image urls that appear on css not being reported when in single page mode.
  • Tidies up image alt text appearing as "[alt:][alt:]" (ie the alt text indicator duplicated)
  • Form action urls are now always collected and reported so that they can be checked to see whether they count as insecure / mixed content. They will be link-checked or not depending on the 'Check form actions' option.

v9.7.2 May 2020

  • Fixes bad links being reported incorrectly, where the url exists on a style sheet and is in the form: behaviour: url('#default#') or behaviour: url('#objID')
  • These are directives rather than urls that need testing; they are now handled correctly.
  • Intercepts the Quit command and prevents it if a scan is running in a window or tab. This prevents loss of data if Quit is called by mistake while the scan is still running, or if the system tries to shut down for any reason.

v9.7.1 May 2020

  • Fixes Metadata, Headings etc not being visible in Robotize in dark mode
  • Efficiency when generating sitemap.xml
  • Fixes 'Insecure content' option in SEO filter
  • Adds the insecure pages / mixed content csv (if there are any, and if 'individual SEO tests of concern' is checked in the site's options) to the list of CSV files exported with the full report

v9.7.0 May 2020

  • Adds a 'Manage custom dictionary' button above the spell-check table. This tool provides an easy way to see your list of 'learned' words (to check that you haven't 'learned' any misspelled words) and to 'unlearn' any that you learned by mistake

v9.6.9 May 2020

  • Fixes spurious javascript being incorrectly reported as a link in certain situations
  • Fixes relative links being constructed incorrectly where anchor link feature was switched on and page being parsed is a directory url
  • In the warnings tab of the link inspector, if there was a warning about a redirection, it may have contained the final url twice instead of the original url and final url.

v9.6.8 May 2020

Enhancements to scheduling:
  • Adds 'Daily' option.
  • Adds 'Prevent tab proliferation' preference which is on by default. When a schedule starts, any existing tabs or windows with that same website config selected will be closed.

v9.6.7 May 2020

  • Important fix for users using the scheduling feature. Fixes a bug in the last few versions that could change some settings back to defaults when the schedule runs

v9.6.6 May 2020

  • Correctly makes sure that pages are excluded from the sitemap if the url didn't return a good status
  • Adds an 'export / import all websites' for people who are starting afresh with another computer or making a clean install and not migrating their user account
  • Fixes possible crash during scan
  • Other small fixes and enhancements

v9.6.5 May 2020

  • Important release for all users. Eliminates some spurious 'bad links' by correctly ignoring <link rel = dns-prefetch / preconnect ... > which often doesn't contain a full resource url and can return a bad or unexpected status when tested.

v9.6.4 April 2020

  • Further changes to the XML sitemap generation routine. Earlier efficiency improvements (9.6.1) were at the expense of memory use, which caused problems generating sitemaps for very large sites.

v9.6.3 April 2020

  • Fixes problem with the 'send email' finish action for users on 10.14 or 10.15. When that feature is used, users should now see (just once) a dialog asking for permission for Scrutiny to control the Mail app. After that, the necessary permission can be controlled in System Preferences > Security & Privacy > Privacy > Automation (checkbox 'Mail' below Scrutiny 9).
  • Small improvements to the 'web access' for the full report. File size is displayed for each file, and for larger files which the displaycsv.php file won't handle, the 'display' option is hidden leaving only the option to download.
  • Minimum system requirement increased to macOS 10.10 (Yosemite). Users of 10.9 should use version 9.6.2
  • Some changes to licensing functionality; a fairly major update to the Paddle licensing framework and Integrity's program flow at startup, but should be invisible to the user.

v9.6.2 April 2020

  • Fixes bug which could prevent the xml sitemap save dialog from appearing for new users

v9.6.1 April 2020

  • Improvements to exporting functionality:
    • efficiencies (memory / speed) which will benefit users with larger sites
    • the odd bug fix which will cure a possible hang when exporting, particularly the 'images without alt text' table

v9.6.0 March 2020

  • Improves the sitemap visualisation functionality:
    • redesigned 'bubble tree' theme which now looks much more professional
    • adds the concept of 'link juice' - nodes and connections can be shaded or coloured according to that
    • adds some buttons to Preferences > Sitemap which offer choices of colouring / shading for nodes and connections
    • fixes issue with 'list' theme, causing labels to disappear
    • many other small improvements

v9.5.8 March 2020

  • Fixes a bug with the sitemap visualisation / .dot export, which would randomly work or not work if you hadn't just run a fresh scan but were viewing saved data
  • Some real improvements to the visualisation functionality to follow shortly.

v9.5.7 March 2020

  • Adds sortable columns to links views and link inspector for rel = sponsored and rel = ugc. These columns are hidden by default but can be shown using the 'columns' selector above each of those views.
  • Fixes two specific fields of a link instance (target and hreflang) being displayed incorrectly after saving and reloading data

v9.5.6 March 2020

  • Improvements to the styling of the summary reports (html / pdf):
  • Improvements to the 'include web access' feature (previously present but undocumented), which is now documented and supported

v9.5.5 March 2020

  • With the new 'check anchors' switched on, urls with #anchor fragments were sometimes incorrectly appearing in the Sitemap and SEO tables.
  • Fixes urls being duplicated in Sitemap table under certain circumstances and settings.

v9.5.4 February 2020

  • Fixes a problem with the sitemap visualiser

v9.5.2 February 2020

  • Small but important fix, all-links.csv wasn't properly escaping quotes in link text

v9.5.1 February 2020

  • Adds ability to test anchors. You can switch the option on using a new checkbox on Integrity's first tab.
    • this will cause urls like /index.html#top and /index.html#bottom to be reported as separate links and tested separately (resulting in more data, and more cpu time for the crawl)
    • If a link url has a #fragment then Integrity will report the server response code as before (coloured red if status is bad). The anchor has no bearing on this. However, if the status is good, then Integrity makes a further check to see whether a name or id can be found on the target page matching the link fragment. If not, this is added to the link's warnings, and the link will be marked orange
    • You can view the details of the warning in the Link Inspector
    • Note that the anchor check is case-sensitive. Officially anchors are case-sensitive. Some browsers may treat anchors as case-insensitive, but this doesn't mean that all browsers will and it doesn't mean that it's right.
    • Note that you can't 'ignore querystrings' and also test the anchors, since the anchor fragment comes after the querystring.
    • The filter button contains a new item 'Warnings' which shows only links with warnings, this will include links with anchors where the anchor (a name or an id) can't be found on the page
    • As far as the filter button is concerned, 'Warnings' doesn't include redirects, even though both are coloured orange in the interface and redirects do appear in the Link Inspector's Warnings tab. The filter button allows you to separate them
    • The filter button option 'Redirects' will still show redirects, even if you've chosen 'do not report redirects' in Preferences.
    • Typing a '#' into the search field will show links which contain a #fragment.
    • Warnings (which have been reported in the link inspector since v9.0) now cause the link to be coloured orange in the views. As some people like to work towards a clean set of results and may not consider the warnings important, the colouring of warnings can be switched off in Preferences > Links > Warnings. The 'Warnings' filter will still work when colouring of warnings is switched off in Preferences.
  • Adds option for all-links.csv (optionally saved automatically at the end of the scan) to be based on the links flat view rather than the collapsed 'by link' view. Using this option may result in a very large file for larger sites but it is a more comprehensive csv than the default option.
  • 9.5.1 also fixes:
    • garbage urls caused by a url containing a comma, or a data: image within an srcset.
    • garbage urls caused by certain javascript code.
    • fixes bug that's unlikely to have been noticed. If a url redirects and the redirect url has a # fragment, traditionally the rule is that those fragments are just trimmed. But they weren't being trimmed for redirect urls. That is now fixed, but of course the new preference to not ignore anchors is respected.
    • Fixes warnings not being saved / reloaded after application is closed, reopened and 'show data' button used.
    • Fixes insecure warnings not being visible in the table if the user clicks 'no' when prompted to view insecure content at the end of the scan, but then views the table later.
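The anchor check described above amounts to looking for a matching id or name on the target page, case-sensitively. A minimal sketch of the idea (regex scanning of html is for illustration only, not how Scrutiny parses pages):

```python
import re

def anchor_exists(fragment, html):
    # look for id="fragment" or name="fragment"; re.search is
    # case-sensitive by default, matching the behaviour noted above
    pattern = r'(?:id|name)\s*=\s*["\']{}["\']'.format(re.escape(fragment))
    return re.search(pattern, html) is not None

page = '<a name="top"></a> <h2 id="Contact">Contact</h2>'
```

A link to page#contact would be flagged with a warning here even though page#Contact resolves, since anchors are officially case-sensitive.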

v9.4.4 February 2020

  • Fixes bug that could cause garbage image urls to be reported if image checking is on and a srcset attribute contains a url containing a comma (unsafe but not illegal) or data: image.
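A comma inside a srcset url (unsafe but not illegal, as noted above) garbles a naive comma split. One pragmatic heuristic is to split candidates only on commas followed by whitespace; this is a sketch, not Scrutiny's code, and it still wouldn't cover every data: uri:

```python
import re

def parse_srcset(srcset):
    # split on commas followed by whitespace, so a comma inside a
    # url doesn't start a new candidate
    candidates = re.split(r',\s+', srcset.strip())
    # the first whitespace-delimited token of each candidate is the url
    return [c.split()[0] for c in candidates if c.strip()]

urls = parse_srcset("img/a,1.jpg 1x, img/b.jpg 2x")
```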

v9.4.3 January 2020

  • Fixes bug that could cause scan to stall at the starting url if the starting url redirects and if page rendering is switched on.
  • 'Detailed analysis of starting url' window:
    • Fixes bug causing diagnostics window to not show if it has been displayed once already and closed by user
    • Improves reporting of response fields, now shows each set of response fields when a url is redirected, rather than just the final one

v9.4.2 January 2020

  • Small change to the way the 'rules' work: they are no longer applied to the starting url. The previous behaviour had been in place for many years and is unlikely to have caused problems; the new behaviour should be helpful in some cases and unhelpful to no-one.

v9.4.1 January 2020

  • Irons out a problem causing links to be marked external if the case of the link's domain doesn't match the starting domain - ie starting at foo.com, a link to FOO.com would be incorrectly marked as external.
  • Fixes line number column of 'appears on' table within link inspector window
  • Small fix - unquoted link hrefs with no character before the closing bracket weren't being logged properly, leading to some spurious results.

v9.4 January 2020

  • Updates the page rendering functionality, should make this option more efficient and reliable.
  • Improves the exported 'insecure content' report - is now 'flattened' to make it easier to use.

v9.3.6 November 2019

  • If a meta-http-refresh-type redirect redirects from an internal url to an external one, then the link was being left marked as an 'internal' link. It's arguable whether this type of link (which redirects from internal url to external) is an internal or external link, but it's important for certain internal processes that it's marked as external when the redirection occurs. This was happening properly for the more usual types of redirect.
  • In v9.3.5 (the 9.3.5 point release only) if the above happened when javascript rendering was switched on then this could cause the scan to stick at the point when it appeared to have finished.

v9.3.5 November 2019

  • Minor interface fix. Search panel could be resized in such a way that the search terms text field disappeared.

v9.3.4 November 2019

  • Better handling of the situation where image urls are being checked and an image with alt text sits within a regular a href link which also has link text after the image. The link is now correctly reported with its link text, and the image url is correctly reported with its alt text
  • Fixes a bug causing certain links in the above situation to be missed (ie where there is an image beside the link text within a link) and where the new 'lazy load' feature is switched on

v9.3.3 October 2019

  • When a site is copied, ftp server details are not carried across. It may be that you want aa clone of the settings for the same site, in which case you'll now have to enter them again, but it's more likely that you're copying settings but for a different site in which case it's really undesirable to keep the server details.
  • When the ftp details dialog is triggered from Settings > Finish Actions > ftp sitemap, and details are added or edited and OK'd, the field on the Finish Actions tab is now updated accordingly.

v9.3.2 October 2019

  • Small improvement to the 'lazy loaded' image finder. Now finds video and audio urls in the source tag / data-src attribute.
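
One common lazy-load convention puts the real url in a data-src attribute. A rough sketch of that extraction (the regex and function name are illustrative only; Scrutiny supports several lazy-load schemes, not just this one):

```python
import re

def lazy_media_urls(html):
    """Collect urls from data-src attributes, a common lazy-load
    pattern for images, video and audio sources."""
    return re.findall(r'data-src\s*=\s*["\']([^"\']+)["\']', html)

lazy_media_urls('<img data-src="hero.jpg"><source data-src="clip.mp4">')
# → ['hero.jpg', 'clip.mp4']
```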

v9.3.1 October 2019

  • Adds Expand All and Collapse All to the View menu, with keyboard shortcuts. They work on all expandable tables, whichever is the current view.

v9.3.0 October 2019

  • The main tables now retain their selection when sorted, as expected
  • 'Support' button added to diagnostics window which shows if unexpectedly few results are found
  • Grammar count is capped because a very long page with a large grammar count can make the crawl appear to have hung

v9.2.2 September 2019

  • Moves the "When scanning a secure (https://) site:" settings (which help with migration to a secure site) from Preferences to Settings and Options (Options tab) so that they can be set for each site

v9.2.1 September 2019

  • Fixes problem sorting the page weight table by file size compressed / uncompressed
  • If 429 codes are encountered (too many requests), more information is given in the Link Inspector's Warnings tab. A 429 may come with a 'retry after' header, which Scrutiny honours. The server may also provide some information in the html of the page which accompanies the 429 code. All of this information is sent to that link's warnings for the user to see.
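
The 'retry after' handling mentioned above follows a standard pattern: a 429 response may carry a Retry-After header giving either a delay in seconds or an HTTP date. A hedged sketch of parsing the numeric form (not Scrutiny's code; the date form would need full HTTP-date parsing):

```python
def parse_retry_after(value, default=1.0):
    """Return the delay (in seconds) from a Retry-After header value.

    Handles the delay-seconds form; an HTTP-date (or a missing header)
    falls back to a default delay.
    """
    if value and value.strip().isdigit():
        return float(value.strip())
    return default

parse_retry_after("120")  # → 120.0
parse_retry_after(None)   # → 1.0
```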

v9.2.0 September 2019

  • Fixes a bug causing bad links to be reported incorrectly when the link contains a fragment (#something) as well as non-ascii characters in the link
  • If a user-agent string for a mobile browser is being used, some sites generate an 'intent://' url. Scrutiny no longer reports 'unsupported url' for such links.
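
The fragment-plus-non-ascii combination in the first fix above is fiddly because the fragment must be split off before the path is percent-encoded for the request (fragments are never sent to the server). A rough illustration of the general approach using Python's urllib, not Scrutiny's implementation:

```python
from urllib.parse import quote, urlsplit, urlunsplit

def request_url(url):
    """Percent-encode the path for the wire and drop the fragment."""
    parts = urlsplit(url)
    path = quote(parts.path, safe="/%")  # keep '/' and existing %xx sequences
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, ""))

request_url("https://example.com/caf\u00e9#top")  # → 'https://example.com/caf%C3%A9'
```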

v9.1.1 September 2019

  • Improvement to 'lazy loaded' image functionality. Adds Blocs to the supported systems.
  • Adds .webp to the list of recognised image extensions (used in various places within Scrutiny)

v9.1.0 August 2019

  • Adds option to look for 'lazy loaded' image urls. There are various ways to implement lazy loading but Scrutiny should find them in the case of the most common implementations.
  • If a meta http refresh is within comments (including <!--[if lte IE 9]> ... <![endif]-->) then it's now correctly ignored.
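
Ignoring a commented-out meta refresh comes down to stripping comments before searching for the tag; downlevel-hidden IE conditional comments like <!--[if lte IE 9]> ... <![endif]--> are ordinary html comments as far as a parser is concerned. A simplified regex sketch (a real html parser handles more edge cases):

```python
import re

def active_meta_refresh(html):
    """Return the first meta refresh tag that is NOT inside a comment."""
    stripped = re.sub(r"<!--.*?-->", "", html, flags=re.S)
    m = re.search(r"<meta[^>]*http-equiv=[\"']?refresh[\"']?[^>]*>", stripped, re.I)
    return m.group(0) if m else None
```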

v9.0.13 August 2019

  • Adds user-configurable js rendering time. This applies only to the 'render js' setting, which shouldn't be needed by most users.

v9.0.12 August 2019

  • Fixes small bug that was preventing the app from running on Catalina

v9.0.11 August 2019

  • Adds 'line number' to link instances (the line number of the link within the html file) - there's now a column to show this number in the 'by link' view (when urls are expanded), by status, links flat view and the table within the link inspector.
  • Fixes bug that was causing broken images to not be shown in links view when Filter button was set to Images. The same bug may have had other symptoms too relating to broken images.
  • Fixes possible problem of some repetition in the 'columns' selector of certain tables.

v9.0.10 August 2019

A number of improvements to the Full Report:
  • Fixes a problem with the 'pages with mixed content' count (which appears in the short summary above the SEO table and the summary report). The number was appearing mysteriously without a label, and the statistic wasn't surviving a quit of the Scrutiny app
  • Number of 'images with no alt text' now appears in the SEO summary in the summary report.
  • Improves format of date & time stamp

v9.0.9 July 2019

  • Fixes a problem with the word count (which appears in the SEO table and the page inspector)

v9.0.8 July 2019

  • Fixes bug with archiving's 'don't show dialog each time' and 'browsable format' settings.
  • Adds 'Recheck parent pages of selected urls' to 'by status' view.
  • Fixes some problems that were causing miscounting when selecting multiple items and using 'recheck parent pages' option.
  • Adds an efficiency when selecting multiple urls and checking parent pages.

v9.0.7 July 2019

  • Improves the links views' context menus, specifically by adding more page-rechecking and url-rechecking options in various context menus
  • Fixes a bug causing the bad link count to decrease after a link is rechecked and still found to be bad
  • Fixes some possible spurious bad links with href= in them.

v9.0.6 July 2019

  • Patches a problem that could cause the bad link csv to not appear within the full report, and possibly other random problems when exporting the flat links view as csv
  • Fixes problem causing missing orange (warnings) segment of the links pie chart (in the Links results within the app as well as on the full report) after data has been reloaded using 'show data' or a manual save and load data

v9.0.5 June 2019

  • Updates the manual for v9 (manual is contained within the app)
  • Improves efficiency a little when using javascript rendering. This setting should not be used unless content is not visible without clientside rendering.
  • Improves the logic of the 'Target Page' tab of the link inspector. The appropriate fields / buttons / warnings will be shown depending on whether the link's target is html, external or a bad link.
  • A couple of corrections in the Preferences window

v9.0.4 (no longer beta) June 2019

  • Adds option to context menu of 'Links by status' view. When selecting multiple and calling context menu, there are now options to 'recheck selected link urls' or 'recheck parent pages of selected links'. The latter is a v9 feature and is already in a number of single-selection context menus. It's a more comprehensive check and is useful if the link has been 'fixed' by being removed or its target url has changed.

v9.0.3 (version 9 still beta) June 2019

  • Adds 'h1 count' column to the SEO table, which is sortable, making it easy to identify pages which have no h1, more than one h1 or many h1s.
  • Adds 'pages with no h1' and 'pages with multiple h1' to the SEO filter button, to select / export the details of those pages.
  • The count of those pages also appears in the short summary on the SEO tab and in the generated report
  • Adds option to 'flatten' spelling 'by word' view, putting the reported word into every row (The option is in Preferences > Spelling > Exporting)
  • Fixes bug causing problems and possible crash when exporting as html from certain views.

v9.0.2 (version 9 still beta) June 2019

  • Improvements to sitemap ftp:
    • Any details you enter or edit within the ftp dialog during export/ftp are now correctly saved for next time
    • Improvements to sftp functionality which may have failed (despite success message) when 'don't save locally, just ftp' was used
  • Fixes some issues with renaming folders in websites folder list, dragging websites from one folder to another and dragging and dropping folders to reorder.

v9.0.1 (first v9 public beta) May 2019

  • Redesigned link inspector
    • puts redirects on a separate tab rather than a pop-up window
    • adds warnings tab, contains details of anything that gives this link an orange 'warning' status
    • traditionally the orange 'warning' status meant redirect(s) but now can include a number of other things
    • adds 'target page' tab, which shows certain target page properties and a button to access Page inspector
  • Page inspector
    • adds sortable tables of inbound links and outbound links
    • adds download time and mime type to page inspector
  • Adds detection of unclosed comment tag and unclosed script tag, these things are included in 'Warnings'. In future the number of possible things that you can be warned about will grow. Adds Warnings into diagnostics window.
  • Change to the internal flow. Previously link urls were stored 'unencoded' and 're-encoded' for testing (unicode characters and reserved / unsafe ascii characters). This is fine 99.9% of the time but sometimes this can cause a problem when this unencode/re-encode cycle produces a url that doesn't exactly match the url as it originally appeared on the page, and the server doesn't respond to the changed version. This can cause Integrity/Scrutiny to report 404 for a link which 'works in a browser'.
  • Link text now searched when using search box and by page view
  • Much better control over what is included in the full report, and removal of a couple of vestigial options - 'on finish, save bad links' and 'on finish, save SEO'. If you need these, enable the full report and choose those options. Users who like to "check everything" were finding duplication of files, many save dialogs and duplication of effort for Scrutiny.
  • Adds threshold for redirect chain to Preferences > SEO and includes redirect chains in warnings.
  • Better handling of redirection from a http or https url to a tel:, mailto: etc. Does not create a warning but cancels the connection and sets the status to 'not checked'. The redirect details can be seen within the link inspector.
  • Adds SFTP as an option (to existing FTP and FTPS aka FTP with TLS) for uploading xml sitemap and orphaned pages check.
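
The round-trip problem described in the 'internal flow' item can be reproduced in a couple of lines: decoding and re-encoding a url is not guaranteed to give back the original bytes. For example, using Python's urllib purely as an illustration:

```python
from urllib.parse import quote, unquote

# A tilde may be published percent-encoded; re-encoding won't restore
# it, because '~' is an unreserved character that quote() leaves alone.
quote(unquote("/%7Euser/page"))  # → '/~user/page' (not '/%7Euser/page')

# Conversely, a literal '+' gets newly encoded on the way back out.
quote(unquote("/a+b"))           # → '/a%2Bb'
```

If the server only answers the exact form it published, the round-tripped url can 404 even though the link 'works in a browser', which is why storing and sending the original bytes matters.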

v8.4.1 released June 2019

  • Fixes some issues with renaming folders in websites folder list, dragging websites from one folder to another and dragging and dropping folders to reorder.

v8.4.0 released May 2019

  • when parsing .css files for background images urls, now properly ignores anything /* commented out like this */
  • patches bug which could have caused the odd link url to be missed or a spurious link url if certain unlikely code appears in the page
  • reduces some false positives by retrying urls once using GET if they fail the first time with certain errors under the more efficient (and default for external links) HEAD
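
The css change in the first item above is essentially 'strip comments, then extract url(...) values', in that order. A toy version of that sequencing (a regex sketch; real css parsing has more edge cases):

```python
import re

def css_urls(css):
    """Remove /* ... */ comments first, so commented-out backgrounds
    are ignored, then collect url(...) references."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)
    return re.findall(r"url\(\s*['\"]?([^'\")]+?)['\"]?\s*\)", css)

css_urls("body { background: url('a.png'); } /* url(old.png) */")
# → ['a.png']
```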

v8.3.14 released May 2019

  • corrects default text / background colours in 'robotize' window and sitemap visualisation. In dark mode the default values weren't playing nicely

v8.3.13 released May 2019

  • Important fix - fixes bug which was causing urls to be reported bad where they were found as the src of certain tags (iFrame, Embed, Script) and were not quoted
  • Fixes some unexpected urls appearing in Link views when the search box is used

v8.3.11 / 8.3.12 released May 2019

  • Fixes possible hang at completion of scan if archive feature is switched on.

v8.3.10 released May 2019

  • Improvement to subdomain comparison; internal links with subdomains may have been considered external if the starting url had a non-www subdomain. (This all depends on the 'consider subdomains internal' option being switched on)
  • Change to logic: previously, the reporting of insecure internal links on a secure page depended on having the 'consider http links external' option switched on. Now it may be switched off and the alert to insecure pages will still be shown.

v8.3.9 released April 2019

  • Important fix to the 'Render page/run js' functionality (note that this option should always be *off* unless you're absolutely certain that the site's links aren't visible without javascript enabled. If this is the case then the site is 'inaccessible' and you should address this.) The bug could cause the scan to be incomplete or even just crawl the starting page only.
  • Some improvements to the tasks (Results selection) table

v8.3.8 released April 2019

  • Fixes summary report having blank links pie chart
  • Fixes summary report containing some incorrect SEO statistics (pages with duplicate titles / descriptions counted twice)

v8.3.7 released April 2019

  • Fixes fatal error if the option to check linked files is switched on and a css file isn't UTF-8 encoded

v8.3.6 released April 2019

  • Fixes bug causing the crawl to not remain within the 'directory' it starts within. (since 8.3.3)

v8.3.5 released March 2019

  • Fixes problem of redirects being duplicated after autosave (or manual save) and reloading the Scrutiny data. ie status showing as "200 no error < 301 moved permanently < 301 moved permanently" rather than the correct "200 no error < 301 moved permanently"

v8.3.4 released March 2019

  • Adds context menu to table within link inspector. Contains Visit, Highlight, Locate (as per the buttons below, which work if you first select a page within the table)
  • Engine now correctly ignores 'data-' elements within link tags. This was leading to some spurious results

v8.3.3 released March 2019

  • Further improvements to 'soft 404' functionality. If the target of a link returns plain text rather than formatted html, Integrity now handles this. If the target page is formatted html and has a title, the title is also now searched for the list of soft 404 terms.
  • Improvements to site search. Adds case sensitivity option.
  • Further small fix for a potential problem to pattern matching (as used in site search, blacklisting soft 404 etc)

v8.3.2 released March 2019

  • Fixes problem of 'soft 404' search returning 'near matches'. It now searches literally for the string(s) you enter.
  • Ditto for site search, which may have also returned 'near matches' when using recent versions of the system. It also now performs 'exact match' searches.
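
The 'exact match' behaviour described above, as opposed to pattern matching that produces near matches, amounts to a plain substring test. A minimal sketch (the function name and case-insensitivity are assumptions, not Scrutiny's documented behaviour):

```python
def looks_like_soft_404(page_text, terms):
    """True if any user-entered soft-404 term appears verbatim
    (case-insensitively) in the page text - no regex, no fuzziness."""
    text = page_text.lower()
    return any(term.lower() in text for term in terms)

looks_like_soft_404("Sorry - Page Not Found", ["page not found"])  # → True
```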

v8.3.1 released March 2019

  • Adds disc space check before autosaving data (which can be a large amount of data)
  • Fixes a bug causing the crawl to stall under obscure circumstances (starting the scan at a deep url, where the deep url contains an asterisk character.)

v8.3.0 released February 2019

  • Improvements to saving sitemap xml:
    • better error handling and reporting
    • when large sitemap is broken into multiple files, these are saved into a new folder at the location that the user chooses
    • option added to prevent splitting of large XML file (there isn't a switch in the interface; it must be set using the Terminal)
  • Adds built-in error/debug console which can help us give support
  • The canonical url (if pointing to a different page than the one it appears on) has always been collected and shown in the SEO table; such urls are now also shown as a link instance in the links results tables.
  • Testing and reporting image urls within style sheets was listed in release notes for 8.2 but not fully implemented. Now fully working.
  • On systems 10.12+, new windows will open as tabs in a single Scrutiny window. It's possible to drag these out if you want a tab to be a separate window, or use 'Merge all windows' if you want separate windows to become tabs in a single window.

v8.2.4 released February 2019

  • Fixes a potential crash when exporting full report (and possibly the links flat view) under certain circumstances.

v8.2.3 released February 2019

  • Corrects odd behaviour when a canonical tag appears twice on a page. This situation is handled more gracefully.

v8.2.2

  • Minor improvements to 'check for updates' functionality.

v8.2.1 released January 2019

  • Fixes problem causing certain save/export and alert dialogs to not show up

v8.2.0 released January 2019

  • Able to pull image urls from css style sheets and check their status (if the 'check linked js and css files' option is switched on)
  • Fixes bug causing some code to appear in stripped plain text if tags have no whitespace between - this could cause spurious words to appear in the spellcheck
  • Important fix, a bug could cause crash during scan in certain circumstances (though not reported many times). This was also causing some inefficiency
  • Scrutiny is now Notarized by Apple (security checked and certified)

v8.1.22 released January 2019

  • Fixes bug causing the results selection > Insecure content to not display correct information sometimes after saving data (or if autosave is on) and re-loading the data.
  • Changes some defaults for SEO (these are editable by the user in Preferences > SEO, but these values are the defaults for new users). In line with current thinking, a long title is one that's over 60 characters, and a long description is one that's over 200 characters.
  • Fixes problem, data wasn't always being cleared properly from the 'insecure content' list (if any existed) when user switched between saved data from different sites

v8.1.21 released January 2019

  • Search box for link results is now a literal full match.
  • Subtle improvement to html parsing relating to comments
  • Better handling of SSI where the include happens within an html tag
  • Changes the method of saving the data (during autosave or when manually saving all scrutiny data for a site). Faster and takes less disk space. Any data that successfully saved and loaded previously will still do so
  • Some engine improvements re extracting canonical url

v8.1.20 released December 2018

  • Small fix that can prevent a loop in unlikely circumstances with certain options switched on - a 404 page containing a meta-http refresh.
  • Some updates to the French localization

v8.1.19 released November 2018

  • Improvement to subdomain handling. The subdomain option 'treat subdomains of starting url as internal' may have not worked as expected if the starting url had a subdomain already, including www. This option should now work as expected for starting urls that include www.
  • Fixes a bug with the sitemap csv export which in unlikely cases could cause some unexpected urls in the results (no problem with the xml or other formats)
  • Fixes a bug which caused problems when trying to re-open manually-saved Scrutiny data (ie a .scrutiny file containing data from a historical scan)
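
The subdomain option in the first item can be pictured as: reduce the starting host to its registrable domain, then treat any host ending with it as internal, so www.example.com and blog.example.com match. A naive two-label sketch (real code needs a public-suffix list; this is only an illustration):

```python
from urllib.parse import urlsplit

def is_internal(url, start_host, subdomains_internal=True):
    """Compare a url's host with the starting host, optionally
    accepting sibling subdomains of the same registrable domain."""
    host = urlsplit(url).hostname or ""
    if host == start_host:
        return True
    if not subdomains_internal:
        return False
    root = ".".join(start_host.split(".")[-2:])  # naive registrable domain
    return host == root or host.endswith("." + root)
```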

v8.1.18 released November 2018

  • 'Fixes' the link count in the SEO table. It had always been a simple count of all urls appearing on the page in question, which could include urls such as embedded audio/video, linked files and image urls (if you're including those things in the link check). The expectation for the 'Link count' column is that it gives the number of <a href links only. Now it does.

v8.1.17 (not generally released)

  • Adds a new tool - Schedule overview window (access via Tools menu or cmd-6) This allows you to view all schedules in one place, and properly unload and remove any you choose. There are various historical reasons why a launchagent may have remained after the website's config is gone, this allows you to clean them up. Or simply remove all schedules from one place if you are removing or moving Scrutiny from a computer.

v8.1.16 released November 2018

  • Fixes a couple of problems that could cause the scan to speed up above the limit set in Settings > Timeout and Delays
  • Change to the 'Limit requests to X per minute' setting - it had originally been set to reject anything below 30. That's now reduced to 10, as some sites are getting more difficult to scan, with various ways of detecting automated requests.

v8.1.15 released October 2018

  • Improves iFrame support
  • Fixes problem with img alt text being truncated if it contains a single quote character
  • Fixes problem causing 'http links found within https site' dialog to be shown more than once at the end of the scan (and autosave performed more than once too, although that wouldn't have been visible)
  • Important fix for everyone. If a sitemap is provided publicly on the website *in xml format*, this could have prevented full crawling of the site (due to deliberate rules about checking but not following urls when the user wants to check urls within an xml sitemap)

v8.1.12 released August 2018

  • Fixes bug that may have caused a crash with certain urls

v8.1.10 released August 2018

  • Further work around the improvement to the meta http-equiv refresh handling

v8.1.9 released August 2018

  • Mojave dark-mode ready
  • If crawl is started at a https:// page and a canonical of a secure page is insecure (http) then this is included in the report of insecure / mixed content pages. Previously this situation could be identified in the links data but wasn't included in the 'insecure/ mixed content' alert at the end of the scan.
  • Fixes a bug which would have caused Scrutiny to stall at the first url (reporting that as a 200 but going no further) under an unlikely set of circumstances

v8.1.8 released August 2018

  • Different handling of a common issue: linkedIn urls returning a 999 code (even though the link may work in a browser). This is not a Scrutiny issue but common to all webcrawlers / testers. LI seems to detect the rapid requests and/or non-browser querystring and returns a non-standard 999 code. Scrutiny used to present this as a server error and count it as a bad link. Now it labels it as a warning, and does not count it as a bad link. This is because it is not necessarily a bad link, it just hasn't been possible to test it properly.
  • Fixes issue with meta http-refresh not being observed if the page contains content with links. (The content was being parsed for links, in favour of the redirection being observed.)
  • Fixes bug causing no data to show when Filter button on SEO table is set to 'Duplicate descriptions'
  • (NB this version of Scrutiny is built against the 10.14 APIs which are still officially beta. This version should run fine on all supported systems. NB 8.1.4 was the last version built with an SDK version < 10.14)
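
The triage described for LinkedIn's 999 code boils down to a third bucket between good and bad: 'blocked, so couldn't actually be tested'. Sketched as a classifier (the bucket names are illustrative, not Scrutiny's internals):

```python
def link_status_bucket(code):
    """Map an HTTP status to good / warning / bad, with 999 treated
    as a warning because the link was blocked, not proven broken."""
    if code == 999:           # LinkedIn's non-standard anti-bot response
        return "warning"
    if 200 <= code < 400:     # success and ordinary redirects
        return "good"
    return "bad"

link_status_bucket(999)  # → 'warning'
```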

v8.1.7 released July 2018

  • Fixes a problem with the reports seen on 10.14 - piecharts appeared blank

v8.1.6 released July 2018

  • (officially beta due to the 10.14 APIs still being beta, which this version of Scrutiny is built against. This version should run fine on all supported systems)
  • Better handling of a recurring 'Refresh' header field which could have appeared to leave the scan hanging when almost 100% finished
  • Fixes a possible crash after exporting links to csv

v8.1.5 released July 2018

  • (officially beta due to the 10.14 APIs still being beta, which this version of Scrutiny is built against. This version should run fine on all supported systems)
  • Enables dark mode when using macOS 10.14 Mojave (will respect the user's choice of dark or light mode in System Preferences)
  • Some fixes to keyword density (reporting keyword stuffed pages) functionality
  • Some improvements to the sorting and filtering which should prevent a short hang when using the 'bad links only' checkbox in the links results. There may still be a bit of a delay with some large sites and when the 'by status' tab is selected.
  • Other small fixes

v8.1.4 released July 2018

  • Fixes problem scanning a site locally when the directory path contains a space or certain other characters.
  • Adds override for the built-in behaviour which excludes pages from the sitemap if they are marked robots noindex or have a canonical pointing to another page. These options are in Preferences > Sitemap, they should be on by default and should only be switched off in rare cases where it really is necessary, such as using the sitemap for a purpose other than submission to search engines (where you do want all internal pages in the file)
  • Updates links within the app and dmg (support, EULA etc) to new https equivalents

v8.1.3 released June 2018

  • Fixes problem copying page url in 'by page' view
  • Some fixes to 'recheck' functionality from context menus
  • Now correctly handles quotes and return characters within link text when exporting links as csv
  • Corrects the flat / hierarchical html sitemap export option. (was working the opposite way around to expected)

v8.1.2 released June 2018

  • Fix for http links being included in sitemap when 'consider http pages as external' is checked in Preferences.
  • If a link whose target is internal redirects to an external link, that link is now considered by Scrutiny to be external rather than internal. Previously, being considered internal was causing such a link to be included in the sitemap despite 'consider http pages as external' being checked.
  • Such a link is now also correctly included in the 'Insecure content' report as redirecting to an insecure (http://) page
  • Fix to the Links / By Link table, which was not remembering its column information (b 8.1.22); ditto for the By Status view

v8.1.1 released June 2018

  • Fixes pages being excluded from the sitemap (reason given, canonical points elsewhere), under certain circumstances and with the 'ignore trailing slash' button unchecked (which is checked by default, should only be unchecked if really necessary).
  • "Use Unicode normalization form KC " is now off by default, it's proved less helpful to have it on than off.
  • Some fixes to 'mark as fixed' functionality (and re-saving the autosaved data after user makes such changes)
  • Fixes problem with sitemap rules
  • Fixes problem with 'update change frequencies' button
  • Tidies up the sitemap transfer, a 'success' message added if the sitemap is transferred by ftp after saving locally, as it wasn't previously clear that this had been performed.

v8.1.0 released May 2018

  • Adds support for <embed> tag (thus finding and testing audio and video urls within that tag)
  • Adds detection of audio and video mime types. The filter button in Integrity Plus and Pro allows you to see audio urls / video urls.
  • Adds the options to include video in the xml sitemap
  • Deals with problem where the autosave feature was taking too long, giving the impression of hanging at the end of the scan, or when clicking 'Show data'. The data files are smaller, and are saved and loaded more efficiently.
  • This improves the situation greatly, but very large sites (tens of thousands of links) may still appear to hang for a while at the end of the scan, or when clicking 'Show data'. This is being worked on, in the mean time, if you experience this, the workaround is to trust that the app is saving / loading the data and wait, or to switch off the autosave feature.
  • Fixes issue with including colour in the sitemap dot export
  • Fixes case where a set of circumstances could cause the results to be shown early (and error shown for first url) while scan actually continues.

Many improvements to the sitemap visualisation:

  • new theme: list
  • numbers added to some themes to show links in and links out
  • many general appearance improvements

v8.0.14 released May 2018

  • Fixes manual sitemap ftp - after generating the sitemap and showing the ftp dialog, the transfer wasn't being performed.
  • Reinstates the import v5/v6 website configs, but as an option rather than being performed automatically on first startup as v7 did. (Find it under File > Websites from earlier Scrutiny version)

v8.0.13 released May 2018

  • Fixes broken site sorting. Your list of sites (whether viewing by folder or a single list) is now correctly sorted by name by default, and sortable by name, url or last checked.

v8.0.12 released May 2018

  • Repair to 'ignore and
  • Some fixes to exporting of links as csv or html, fixes possible crash when exporting

v8.0.11 released May 2018

  • Fixes problem with exporting Sitemap table as csv
  • Adds columns to SEO > Meta data table for <meta itemprop=name, <meta itemprop=description, <meta itemprop=image
  • Fixes issue which could have caused spurious data to appear in some of the meta data columns

v8.0.10 released April 2018

  • Adds option to export SEO summary headlines as csv. (Helps create custom reports using Google Data Studio or other reporting tool )
  • The summary is also included as a csv in the 'full report'
  • Fixes weekday selector, which wasn't appearing correctly when selecting Schedule > Weekly
  • Fixes Preferences > Links > Do not report redirects which was apparently not working.

v8.0.9 released April 2018

  • Further measures to reduce 'false positives' (an important v8 feature). In this case, 403 (forbidden) may be returned in some cases if the useragent string is Googlebot or not a browser. Where a 403 is received and the user has the useragent string set to Googlebot or Scrutiny, the url is retried once, with cookies, the GET method and the useragent string of a regular browser
  • Doubles the alt text buffer, alt texts of more than 1,000 characters were regularly being seen.
  • Some fixes to the reporting (full / summary / csv / pdf) - possible crash when generating that manually or as a finish action, and SEO radar charts.
  • Fixes spelling dialog so that it properly shows grammar details
  • Fixes situation where there are no spelling results to report but are some grammar. Scrutiny was claiming from the tasks screen that there were no spelling or grammar problems to report and leaving the tables empty.
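
The 403 retry rule in the first item can be expressed as a small predicate: only when the crawl identified itself as Googlebot or Scrutiny is a 403 worth one more attempt as a regular browser. A sketch (the trigger strings come from the note above; everything else is an assumption):

```python
def retry_as_browser(status, user_agent):
    """True when a 403 was likely triggered by the crawler's
    user-agent string rather than by real access control."""
    crawler_tokens = ("Googlebot", "Scrutiny")
    return status == 403 and any(tok in user_agent for tok in crawler_tokens)

retry_as_browser(403, "Googlebot/2.1")  # → True
retry_as_browser(403, "Mozilla/5.0")    # → False
```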

v8.0.84 released April 2018 - v8 becomes the main release

  • Fixes problem sometimes seen in meta data. Keywords and description could show spurious values depending on the order that the meta data appeared
  • Fixes recent issue with code signing. For a short time, builds would not have run without lowering of security settings

v8.0.82 released April 2018 - still beta

  • Updates list of user-agent strings in the preferences drop-down list, gives Scrutiny a more compliant one.
  • Fixes bug that sometimes caused the scan progress bar to get stuck at the end and not switch to the results screen
  • Fixes bug causing some previous data to not be cleared from the Links by status and Links flat view

v8.0.8 released March 2018 - beta release of Scrutiny 8

  • Many fixes to the beta including significant work to the website config handling

v8.0.7 released March 2018 - first beta release of Scrutiny 8

  • All the new features of the v8 scanning engine.
    • Some data structures redesigned, making for some serious efficiencies when the app is running, at the end of the scan and while browsing the data.
    • More information collected about your links, and meta data from your pages.
    • More information collected and displayed about redirects

v7.6.13 released April 2018

  • Fixes problem where image urls from a long srcset could appear truncated or not reported (caused by a buffer overflow)
  • A couple of changes to avoid problems where the user creates website configs using version 8 and then returns to version 7

v7.6.12 released March 2018

  • Fixes bug that prevented full scanning if a port number is used in the starting url
  • If a site config is deleted with a schedule still set, the schedule is now correctly removed before the site is removed.
  • Fixes problem in isFinished causing multiple instances of the archive dialog
  • Fixes problem with archive causing a hang (archive and browsable settings had to be on)
  • Fixes percent-encoding bug which caused a crash under very unusual circumstances (an unusual character in the link href and an unusual page text encoding)
  • Fixes bug in defaults sync which might have caused some odd effects when creating new config / adding / deleting rules etc.

v7.6.11 released March 2018

  • Efficiency improvements, reducing pause at end of scan, noticeable with large sites (counting pages with possible duplicates)
  • Improvement to IDN functionality, specifically if page contains percent-encoding within domain part of url, wasn't being handled properly.
  • Sorts a problem with redirects, where a url is redirected to a url already in the list. Sometimes this could randomly result in an odd status being reported (302 < 302 rather than the correct 200 < 302)
  • fixes bug causing urls from a srcset attribute to not be reported if not preceded by a regular src attribute

v7.6.10 released February 2018

  • Restores ability to scan website locally
  • Adds two new columns to SEO table - title length and description length - they're optional, you can switch them on using the column selector above the table, and they're sortable numerically.

v7.6.9 released January 2018

  • Some internal updates relating to the rules changes in the last point release
  • Fixes bug in 'highlighting', if the link occurred more than once on the page, only the first would be highlighted properly.
  • Adds ability to scan Wix sites. No visible option for the user; a Wix site is autodetected
  • We don't endorse or encourage the use of Wix; their dependency on ajax breaks accessibility standards, makes their sites difficult for machines to crawl (ie SEO tools and search engine bots) and impossible for humans to view without the necessary technologies available and enabled in the browser.
  • Fixes bug causing a potential crash if pages were excluded from the sitemap for both possible reasons and the user pressed the 'more info' button
  • Fixes minor bug in column selector above certain tables, for French users.

v7.6.8 released January 2018

  • Some improvements to 'rules' dialog:
    • Rules dialog opens as a sheet attached to the main window, rather than randomly positioned on the screen
    • Adds 'urls that contain...' and 'urls that don't contain....' option giving much more flexibility
    • (removes 'only follow'. The wording of this became confusing in certain cases (eg if you have more than one of those rules) and it's no longer required because it's the same as 'do not follow urls that don't contain' )

v7.6.7 released January 2018

  • Fixes bug preventing keywords from showing in SEO meta keywords column
  • Some small improvements aimed at preventing occasional hang or crash when scan finishes

v7.6.6 released January 2018

  • Important update for French users: when using the French localisation, blacklist rules ('Ignore links containing' etc) appeared not to save.

v7.6.5 released January 2018

  • Some fixes relating to 're-check' from context menu items - fixes possible crash or apparent inaction after using that context menu item
  • (When re-checking from the 'by page' or 'by status' views, no feedback is given to the user until the re-checking is complete - this fact is noted)
  • Fixes problem with visualisation (.dot) export, some connections weren't being included under some circumstances.
  • When exporting a .dot file, the 'cleaned up sitemap' is no longer marked 'recommended' and the full file will be the default. This ties in with imminent changes in a new version of SiteViz (the visualiser built into Scrutiny) which does the 'cleaning up' itself (ie removes links that go 'upstream'). It's now best that all links are included in the .dot file because SiteViz (and, in the near future, the visualiser within Scrutiny) will display the number of internal backlinks and colour nodes according to how many inbound links there are.

v7.6.4 released November 2017

  • Improvements to scanning a site locally: improved handling of root-relative links ('/example.html', relative to the site root). Scrutiny now constructs that url relative to the directory of your starting file:// url, which is most likely to be correct; previously it was constructed relative to the drive root
  • Enables sorting in the new h1 and h2 columns of the SEO table
  • Built with a greater level of optimization

v7.6.3 released November 2017

  • Fixes problem that could cause some instability when scan is started at a local (file://) url

v7.6.2 released November 2017

  • Fixes problem with discovering all frame urls within a frameset
  • Adds detailed diagnostic window - shows details of the http request and response, data received, the values of important settings etc. for the initial URL. This window will be offered via a dialogue if the engine didn't crawl any (or many) links. It's also available, where appropriate, via a triangular button below the number of links found in the Links results, and at any time via the Tools menu

v7.6.1 released October 2017

  • Some additions to the French localization

v7.6.0 released October 2017

  • Important fix regarding reporting, particularly the 'scan with actions' and 'perform actions' options where 'generate report' is selected in 'finish actions'
  • French localization completed
  • Fixes bug preventing SEO information (title, description) from being reported if starting with a text list of urls.
  • Adds 'redirects to here' column in SEO table. A count of the number of other urls that redirect (via 3xx or meta http refresh) to this page. The column is easily switched on and sorted to find the ones with the most. This is now an important SEO factor, Google considers a page to be a 'soft 404' if many pages redirect to it.
  • Adds option for the spellchecker to only search the contents of <p> tags (extends the existing options to ignore <nav> and <header>/<footer> content)

v7.5.10 (not generally released) September 2017

  • Some fixes and improvements to the 'file size' functionality, and adds an option to 'load all images'. With this option on, all images are loaded and the size noted, so the 'target size' column of the 'by link' and 'flat' views will show the actual size of the image. With the option off, a size may still be displayed in those columns, but it then relies on the Content-Length field of the server response header, which may be the compressed size of the image or not present. The option slows the scan and uses more data transfer, so only use it if you're interested in the size of images on your pages.

v7.5.9 released September 2017

  • Fixes links incorrectly found within javascript
  • Fixes problem causing the bad link count to be a little higher than the actual number of bad links. (Caused by certain external urls responding with an error but returning OK when automatically retried; the bad link had already been counted and wasn't reset)

v7.5.8 released September 2017

  • Important release for users of High Sierra
  • Fixes problem that could cause incorrect link text to be reported
  • Where appropriate, Integrity uses the HEAD method for efficiency. However, some servers incorrectly return a 404 or 5xx in response to a HEAD request. Such urls are now automatically retried using GET.
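
The HEAD-to-GET fallback described above amounts to a simple retry policy. A hypothetical sketch, not Scrutiny's actual code (later entries extend the same idea to 405 responses and timeouts):

```python
def should_retry_with_get(head_status: int) -> bool:
    """HEAD is cheaper than GET, but some servers mishandle it.

    Per the entry above, a 404 or 5xx in response to a HEAD request
    triggers an automatic retry of the same url using GET.
    """
    return head_status == 404 or head_status >= 500
```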

v7.5.7 released August 2017

  • Adds more French localization
  • Adds Regex option in site search. If 'collecting' is used within the expression, the result of the collection will be shown in the 'search terms' column of the results
  • fixes bug with site search if searching visible text
  • improves information on 'scanning' screen, informs if you've selected 'scan this page only'
  • A .txt file wasn't being recognised as a list of links (as a .csv was).

v7.5.6 released August 2017

  • Adds case sensitivity when checking file:// urls. There's a new option on the 'Advanced' tab of settings and options; case sensitivity is on by default.
  • Fixes incorrect handling of base href = single forward slash, now correctly interprets as "relative to the public root"
  • Fixes crash or hang under particular unlikely circumstances
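
The base href fix above matches standard url resolution, where a base of "/" means "relative to the public root". An illustration using Python's standard urljoin (the urls are hypothetical examples):

```python
from urllib.parse import urljoin

page = "https://example.com/blog/post.html"
base = urljoin(page, "/")                  # base href="/" -> the site root
resolved = urljoin(base, "img/logo.png")   # links resolve against that root
print(resolved)                            # https://example.com/img/logo.png
```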

v7.5.5 released August 2017

  • Fixes bug which prevented some srcset (2x etc) images from being found
  • Increases stability and efficiency under certain circumstances
  • Fixes minor problem with the 'delay' functionality (for throttling requests). The bug caused this setting to sometimes not be observed.

v7.5.4 released August 2017

  • Reduces min width of main window to 1080px for users of smaller / portrait mode screens.
  • Fixes bug causing scan to stall if crawling locally and site is on an external volume

v7.5.3 released August 2017

  • Adds options to ftp dialog (sitemap export) to use TLS, and adds field for port number (defaults to the usual 21)
  • Fixes bug causing ftp dialog details to not be saved
  • Some other small improvements such as validation of the directory field

v7.5.2

  • Since the HEAD method started being used, pdfs weren't being parsed (if 'check links within pdfs' was switched on)
  • Fixes bug causing html pages to be excluded from the SEO results and Sitemap if they contained no links
  • Handles urls which contain hash in the middle of url (previously taken as a fragment and removed)

v7.5.1 released July 2017

  • fixes issue with links not being found after self-closing script tag in body (<script .. />)
  • fixes issue with <img src-data= causing a garbage link to be reported
  • Improvements to the way that pages are selected for the SEO results. The earlier method for avoiding duplicates may have caused some pages to have been incorrectly excluded
  • Some changes to the redirect chain functionality. The redirect chain count appeared in the SEO results, and still does, because it's an important SEO issue. But the SEO table shows pages rather than links, and it's illogical to filter that table for redirect chains because pages don't have redirect chains; links do.
  • So the 'redirect chain' option in the Filter list now offers to show the Links results, sorted by redirect count.
  • Further improvements to engine: where the HEAD method is used and a 405 (method not allowed) is received, the connection is retried with GET.
  • Adds ability to begin crawl at an xml sitemap where the sitemap is a sitemap index file which links together a number of xml sitemap files
  • More improvements to the new advanced options which will be helpful in a small number of cases
  • Where a timeout is encountered, Integrity will now invisibly retry once, in case it's a spurious or short-lived problem
  • fixes bug with sitemap generation, if sitemap was large enough to need splitting into multiple files, one was being missed from the sitemap index file
  • fixes bug affecting sitemap transfer (ftp)

v7.4.4 released June 2017

  • Fixes problem with 'Learn all selected' button on Spelling results window
  • Fixes parsing problem that could cause spurious links (incorrectly found within javascript)
  • Small change that helps stagger multiple simultaneous requests

v7.4.3 released June 2017

  • Adds 'meta refresh' column to Links tables 'by link' and 'flat view'. The column is sortable, so makes it easy to find all of the meta refresh redirections on a site.
  • If a link is redirected by meta refresh, the Preference 'don't report redirects at all, only the final status' is now correctly observed
  • Fixes bug causing urls to be duplicated in the sitemap under certain redirection situations
  • Adds French localisation to all of the context help (this is only a first step, all buttons / labels are being translated.)

v7.4.2 released June 2017

  • Fixes bug causing apparent random crash
  • Improvements to thumbnail creation - with some sites the thumbnail could appear blank or incomplete

v7.4.1 released June 2017

  • Improves javascript rendering process. Relevant to Wix sites which rely on this functionality (The advanced setting 'Render JS' needs to be switched on, and ideally number of threads turned down. It will take time, and computer resources.)

v7.4.0 released May 2017

  • Adds support for IDNs - start with either the unicode or encoded version, the unicode version will be displayed, the http requests will be correctly handled using IDNA encoding
  • Fixes interface bug, if a starting url was edited, and the editing finished by clicking outside the field, the 'scan now' button wouldn't work properly until a different site selected
  • Fixes issue where 'by link' view was caching previous data after loading saved data
  • Fix to 'highlight' option in context menu of by link view
  • Increases maximum width of 'url' column of various tables, so that longer urls / querystrings can be displayed (tip: or just hover and see it in a tooltip)
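
The IDN support described above typically means showing the user the unicode host while making http requests with the IDNA-encoded ('punycode') form. A sketch using Python's built-in idna codec (the domain is just an example):

```python
host = "münchen.de"                          # unicode form, shown to the user
wire = host.encode("idna").decode("ascii")   # IDNA form, used in http requests
back = wire.encode("ascii").decode("idna")   # round-trips back to unicode
print(wire)
```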

v7.3.2 released May 2017

  • Adds option (on by default) for the spell-checker & grammar checker to ignore content marked up as <nav> (html5 tags)
  • Adds option (also on by default) for the spell-checker & grammar checker to ignore content marked up as <header> and <footer> (html5 tags)
  • Adds option to ignore or include image alt text within spell check (also on by default)
  • The above options are in Preferences > Spelling
  • Fixes bug causing Scrutiny to fall over in a peculiar set of circumstances (if the canonical url has fewer than 7 characters and the 'treat http and https versions of url as the same' option is switched on)
  • Some safeguards added - if the starting url has whitespace or return characters pasted or typed, these are trimmed before attempting to start the crawl
  • whatsapp: links are now ignored (along with mailto: tel: etc) rather than incorrectly reported as bad links
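
The whatsapp: change above fits the usual crawler pattern of testing only http(s) urls and skipping other schemes rather than reporting them as bad. A hypothetical sketch (the scheme set is illustrative, not Scrutiny's exact list):

```python
from urllib.parse import urlsplit

CHECKABLE_SCHEMES = {"http", "https"}

def is_checkable(url: str) -> bool:
    # mailto:, tel:, whatsapp: etc are ignored rather than
    # incorrectly reported as bad links
    return urlsplit(url).scheme.lower() in CHECKABLE_SCHEMES
```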

v7.3.1 released May 2017

  • Allows generation of a sorted list of images by file size, and which pages they appear on (adds 'target size' column (optional) to the Links 'by link' and 'flat' views)
  • Fixes a couple of issues with keyword analysis and adds some information to the Help files
  • Fixes a problem with the preview of images without alt text (double-click a table row to open a preview of the image)
  • Fixes problem of keyword count in headings not being displayed properly since the change from a single headings column to separate columns for h1, h2, h3, h4
  • Some improvements around site search (if a list of search terms is pasted in from a windows-formatted text file, the different carriage return characters could cause some issues. Patched now)

v7.3.0 released April 2017

  • Improves insecure content reporting.
    • Adds a new table of results - this shows all issues - secure pages which contain links to insecure ones, and pages with mixed content. It's expandable to show the details in all cases. It can be exported to CSV or HTML
    • These results (if there are any) are available from the Results Selection screen, and are saved with autosaved / manually saved data
    • Links > Filter > http: links and SEO > Pages with mixed content will work as before
  • Adds 'Manage Autosaved Data' (access through Tools menu or cmd-5). Window shows all autosaved data, allows sorting, and allows deletion, either move to trash or immediate delete
  • For new users, the Autosave feature is on by default.
  • Adds disc space check before Autosaving data, and an alert with advice if disc space is low.

v7.2.1 released April 2017

  • Fixes problem with exporting spelling results as csv or html
  • Adds warning symbol if starting url is good but no links are found initially - links to popup with some advice about settings (javascript / cookies required?)

v7.2.0 released Apr 2017

  • Adds option for orphan check to scan a local directory (previously only ftp) and compare with the website scan. (as before, this will obviously only work for static sites)
  • Some changes & fixes to the existing orphan check functionality
    • orphan data is now included in the autosave for the site
    • Adds 'check for orphaned images / pdfs'
    • ftp directory blacklist now accepts file extensions for ignoring
  • Adds 'redirect chain' report to SEO table
  • Adds 'Redirect count' as a sortable column to the Links 'by link' view
  • Adds 3D theme to sitemap visualisation
  • Improvements to 'headings' within SEO table
    • collects and can display heading levels h1 -> h4
    • Adds columns to SEO table to show h1, h2, h3, h4 separately (as before, each column shows a comma-separated list if there are more than one heading at that level)
    • If you know that you won't need all those heading levels, there is a hidden preference to set the maximum level that you want - this can save resources Terminal: defaults write com.peacockmedia.Scrutiny-7 headingLevelMax 3
  • Adds 'copy urls' to the context menu when multiple items are selected in all links tables. (cmd-C also enabled where multiple items are selected). a return-separated list of the selected urls is copied to the clipboard.
  • Other small fixes
    • Fixes double save dialog before export full report as pdf
    • Fixes a problem that sometimes prevented 'Continue scan' from continuing properly (it would appear to check a few links and then stall)
    • Fixes Sierra -specific problem with some alert boxes hanging.
    • Fixes problems of incomplete information after a manual save and re-load of data
    • Better handling of an unusual situation where 'content-type' isn't returned in a response header. In that case, Scrutiny now assumes html and attempts to parse it as such.
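
The content-type handling in the last bullet amounts to a simple fallback. A minimal sketch (the header name and default are assumptions based only on the description above):

```python
def effective_type(headers: dict) -> str:
    # when the server omits Content-Type, assume html and
    # attempt to parse the body as such
    return headers.get("Content-Type") or "text/html"
```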

v7.1.6 released Mar 2017

  • Improves built-in help files. Under the help menu you'll now find a link to the support form, the browsable version of the manual and a pdf (printable or savable) version
  • animated dock icon has a 'sweep' which indicates progress
  • fix to archiving functionality / browsable format for asp pages
  • adds active licence key number to About box

v7.1.5 released Mar 2017

  • Small but important fix to the site search.
  • Fixes bad links not being saved to csv as part of the full report.

v7.1.4 released Mar 2017

A number of fixes around the sitemap functionality, exclusion of pages from the sitemap and canonical urls:

  • Adds a button for viewing pages which have deliberately been excluded from the sitemap. It opens a table showing the url, canonical url and the reason that the page has been excluded. The table has context menu for copy url and visit
  • Where a page has a canonical url pointing to itself, this page may have been incorrectly excluded from the sitemap in the past if the canonical url's capitalisation is different from the page url. This match is now checked in a case-insensitive way.

Further fixes to 'check links within pdfs' functionality:

  • Fixes problem with link text reporting (within pdfs)
  • slightly increases link target area to increase likelihood of capturing the link text

Adds standard 'Help book' manual. Find this under the Help menu. This will be under continuous review and improvement.

Other small fixes

  • Fixes a problem with the column selection button on the Links Flat view
  • Fixes context help within Preferences window

v7.1.3 released Feb 2017

  • Fix to 'check links within pdf documents' setting
  • Fix to the 'page urls have no file extension' checkbox. If it had been set as a result of the user answering 'page' to the question in the dialog box that pops up when you start the scan, and you then quit without changing any other settings, the setting may not have been checked when Scrutiny next opened

v7.1.2 released Feb 2017

  • Fixes problem with Preferences > Sitemap > Template, editing this in earlier versions of 7 caused odd behaviour.
  • Fixes problem with the engine not always recognising an end comment where it looks like ---------------->
  • Adds a useful context menu to the 'live view' table, containing 'copy URL' and 'visit URL'

v7.1.1 released Jan 2017

  • fixes problem with meta description being duplicated in SEO table if twitter meta description is present
  • adds 'pdf documents' to the filter drop-down for link results
  • adds update check
  • adds 'search terms found' column to site search results table. (This makes things easier if you've searched for multiple search terms)
  • tidies up some of the behaviour when adding, editing or deleting sites (window title, breadcrumb widget etc)
  • tidies up display of site search results (sitemap controls now hidden)
  • tidies up niggles with sitemap rules window

v7.1.0

  • First full launch of Scrutiny 7 - out of beta
  • 'Document-based' - have as many windows open as you like to run concurrent scans, view data, configure sites, all at once.
  • New UI, includes breadcrumb widget for good indication of where you are, and switching to other screens. Also includes more logical flow - choose to run a scan, then choose how to view your results (Links, SEO, Sitemap, whatever).
  • Organise your sites into folders if you choose.
  • Autosave now automagically saves data for every scan, giving you easy access to results for any site you've scanned.
  • Better protection when disc space is low; the scan should stop before catastrophe happens. Each separate scan that's running will give an option to pause or continue regardless, when space on the system disc ('/') reaches 750MB
  • Better reporting - summary report looks nicer, full report consists of the summary report plus all the data as csv's

v6.8.21 Released Jan 2017

  • fixes problem with meta description being duplicated in SEO table if twitter meta description is present (twitter:description)
  • Adds Googlebot's user-agent string to the drop-down list of UA strings in Preferences
  • Fix to user-agent string field in Preferences, changes weren't always recognised right away

v6.8.20

  • Fixes 'always use this directory' (when saving archive at the end of scan) - previously this was not remembered if using the 'convert to browsable format' option.
  • Fixes bug causing spurious bad links to be reported where 'check linked files' is switched on and certain sequences of characters appear in javascript within the head of a page.

v6.8.18

  • Adds 'ignore session id within querystrings' - allows you to not ignore the whole querystring, but ignore the session id within it. Useful for forums where querystring is important, but session id's cause crawl not to complete. This is a 'per site' setting and (in version 6) is within the Advanced window.
  • Fixes obscure problem which occurred when canonical is given as just "http://" or "https://"
  • Improvements to archiving in browsable format: handles querystrings and php sites (obviously php pages will then be html snapshots, not active php)

v6.8.17

  • Prevents a crash that could happen at the end of the scan (when progress bar finishes, before results are displayed)
  • Much improved context help system. Discreet 'i' buttons beside many settings pop up some useful advice about the setting, with a button for the support form

v6.8.16

  • Fixes a bug which was causing the orphan check (ftp phase) to fail and be unstable.

v6.8.15

  • Fixes bug causing links to have blank url if the found url contained a particularly unusual percent-encoded character or one that doesn't convert in the claimed encoding
  • Fixes problem with the Robotize view hanging or crashing sometimes when browsing a secure site
  • Now ignores link targets in double curly braces, ie href = "{{ something}}" - used as placeholder in certain content management systems (eg Angular, Expression Engine). Previously Integrity was incorrectly constructing an absolute url and testing it. Note that such links can be rendered properly and tested using Scrutiny's 'render javascript' feature.

v6.8.14

  • Fixes a problem where a page using the Refresh server response field with a large time delay could cause Scrutiny to hang at the end of the scan.
  • Fixes problem deleting items from the black/whitelist rules table (when an ignore rule exists and the user is trying to delete one of the rules below that)
  • Fixes problem with site search dialog capable of being sized too small to show search term field

v6.8.13

  • Important fix to Export > Sitemap > CSV. Since 6.8.12, could hang.

v6.8.12

  • Adds column to search results table, to show search terms found. Thus if you search for multiple terms, you can see which were found on which page.
  • Fixes obscure problem where /head appears within the canonical url; this was mistaken for the /head tag, leading to some spurious code appearing in the link results.

v6.8.11

  • Important fix for anyone scanning locally. Fixes bug present since 6.8.6 which could cause scanning of local files to stall.

v6.8.10

  • Adds 'Always trust invalid server certificate' setting under 'Advanced settings'
  • Further tweaks to the 'render js' function, to make it a little more reliable. (NB, this feature works best with threads turned right down and possibly timeout increased.) There is still a memory leak here (which I believe is within Apple's WebKit rather than Scrutiny) which you may experience if you scan a large site (thousands of pages) with the js feature switched on. There is a workaround to the memory problem, please contact support for details
  • Fixes problem with scope. This affects users who have entered an ambiguous starting url (without a file extension) and answered the question to say that it's a page rather than a directory. That setting will be remembered (deliberately), but previously the setting was then 'stuck' - even if the starting url was edited (or if the question was answered incorrectly the first time), Scrutiny would always search the entire site, even if your starting url then ends in a directory. Now that setting is reset when you edit your starting url

v6.8.9

  • When XML Sitemap is generated, if the file is larger than 10MB or 50,000 URLs then it will be broken into multiple parts. You only need to specify your filename once, the first file is a sitemap index file using the filename that you specify (eg sitemap.xml) and additional sitemap files are numbered (eg sitemap-1.xml, sitemap-2.xml etc)
  • Note that the links within the sitemap index file (if generated) will be *relative urls* (eg "sitemap-1.xml", "sitemap-2.xml" etc). We're so far unable to establish whether this is acceptable; if not, it may be necessary to edit those urls to include the full web path.
  • Fix to 'mixed content' filter of SEO results
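
The splitting rule above (one index file keeping the chosen name, numbered children, relative links in the index) can be sketched as follows. This is a hypothetical illustration of the naming scheme only; the 10MB size check is omitted for brevity:

```python
def split_sitemap(urls, filename="sitemap.xml", max_urls=50_000):
    """Return {filename: child names, child name: its chunk of urls}.

    The index file keeps the user's chosen name and lists the children
    by *relative* url, as the note above describes.
    """
    chunks = [urls[i:i + max_urls] for i in range(0, len(urls), max_urls)]
    stem = filename.rsplit(".xml", 1)[0]
    children = [f"{stem}-{n}.xml" for n in range(1, len(chunks) + 1)]
    return {filename: children, **dict(zip(children, chunks))}
```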

v6.8.8

  • Fixes Spellcheck results not appearing after re-loading in data (data needs to be saved with v6.8.8+ and loaded back in with v6.8.8+. And of course the 'check spelling' checkbox must have been checked when the original scan was performed.)
  • Fixes spurious links being reported when an <a > link doesn't contain an href, but does have an onClick javascript event which contains a url.
  • Fixes the 'observe robots noindex' checkbox in Preferences > Sitemap. This checkbox controls whether pages are included in the sitemap if the page contains a robots meta tag with the noindex attribute. Previously this checkbox had no effect, pages were always excluded from the sitemap if they were marked 'noindex' (which is most likely to be what's desired, and the default for this checkbox will be 'on')
  • The wording is improved on the robots.txt and noindex checkboxes (Preferences > Sitemap)

v6.8.7

  • Adds 'mixed content' to SEO results - if the crawl starts at an https:// url, and resources (linked files or images) are found that have http:// urls then they are shown as having mixed content in the SEO results
  • Adds columns to the SEO results table - Link count already existed (the total number of links on a page) but now there are two additional (optional) columns to break this into Internal links and External links
  • Moves the subdomain switch ('Consider subdomains of the root domain internal') from Preferences to site settings - ie is now set 'per site' rather than globally.
  • Fixes problem introduced recently in 6.8.5 - could cause incomplete crawl when the javascript option is switched on.
  • Catches situation where tag in xml sitemap is empty for obscure reason, and inserts default value.
  • If images are being tested, and the sitemap is set to include them, where an image was a linked file (eg favicon) the spurious text [linked file] was being included as the image's title in the xml sitemap. This has been stopped.
  • Percent-escapes spaces in urls when generating xml sitemap.

v6.8.6

  • Adds support for the server header field "refresh". (Not official web standards but has been supported by most browsers for a very long time.)
  • Improvement to completion of referer field in http requests where a redirect is concerned

v6.8.5

  • Some improvements (memory efficiency) with javascript rendering functionality
  • Bug fix with javascript functionality - if a resource within the page redirected to another url, Scrutiny would show the page itself as having redirected to that url. This could cause the scan to stall if this happened on the starting page
  • Fixes bug causing Scrutiny to not scan pages or stall completely if CR's present within link tags

v6.8.4

  • Improves the responsiveness of the links 'By Link' view, which may have become difficult to use after a long scan.

v6.8.3

  • Adds some charts - animated ones near the progress bar while the scan is taking place, fixed ones on the Links results (piechart) and the SEO results (radar chart). Same charts appear on the Summary report (File > Export > Summary Report).
  • Adds checkbox to Preferences - Show charts while scan is running. (On by default, can be switched off).
  • Hides setting relating to Wordpress / SEO-friendly urls, as this is detected automatically
  • Trims whitespace that may have accidentally been pasted when adding the site
  • Fixes some spurious non-existent links found when hreflang is present within <link > or <a > tags
  • Fixes bug in the new 'check form actions' button, using this option could cause some pages to be excluded from the sitemap
  • A fix to the meta http refresh functionality (previously could crash under unlikely circumstances of a link being found on a meta-http-refresh page)
  • meta refresh now also handled in the redirect trace window.
  • Moves the 'render javascript' button to the Advanced settings, should only be used if there is content on the site that can *only* be discovered if javascript is enabled. Adds a message to that effect under the checkbox

v6.8.2

  • Adds much easier way to select columns for certain tables (links flat view and by link, and SEO) - a menu pulled down from a button just above the table
  • Fixes problem with images / image weight - previously wasn't including all images
  • Fixes problem with 'exporting disabled' message appearing even after licence is activated
  • Fixes problem with 'scan with actions' which was just hanging after performing the chosen actions. Now properly returns to tasks screen.
  • Adds 'Depth' as a column in the SEO table (min number of clicks to reach from the home page). This column has already been appearing in the Links tables, but was called 'Distance', now renamed 'Depth' in those tables

v6.8.1

  • Improves authentication by web form
  • Fixes possible mistaken links 'found' within javascript
  • Now makes sure quotes are trimmed from meta refresh url
  • Some ../ weren't being correctly resolved if they appeared within the middle of a relative link - improved now
  • Adds preference to be tolerant (ie not report a problem) in cases where a ../ travels above the root domain. Although technically an error, browsers tend to tolerate this (assuming the root directory) so such links will appear to work in a browser.
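
The tolerant "../" handling described in the last bullet mirrors RFC 3986 resolution, in which ".." segments that would climb above the root are simply dropped. Python's standard urljoin shows the browser-like result (the urls are hypothetical examples):

```python
from urllib.parse import urljoin

# "../../../" climbs one level too far; RFC 3986 (and browsers)
# clamp the path at the root rather than treating the link as broken
resolved = urljoin("https://example.com/a/b.html", "../../../c.html")
print(resolved)   # https://example.com/c.html
```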

v6.8

  • Adds ability to search for multiple search terms by pasting a CR-separated list into the search term field. (The dialog sheet can be resized so that you can see your list.)
  • Adds button to search dialog for the contents of the field (if it contains carriage return characters) to be treated as a single term or multiple terms
  • Multiple search terms are 'OR'd
  • Small fix to meta refresh redirects
  • Fixes glitch with licensing panel, 'exporting disabled' message was still appearing after users had bought in-app. (cleared by quit and re-start).

v6.7

  • Supports pattern matching within your robots.txt file, eg disallow: /*? . Also adds the same pattern matching functionality to the blacklisting / whitelisting. NB limited to * for 'any number of any characters' and $ meaning 'at the end', eg /*.php$
  • Fixes bug causing pages to be incorrectly reported as noindex under certain circumstances
  • Fixes sorting on Links / Flat view table.
  • The links limit in Preferences is now capped. Previously, entering a stupidly high number could cause problems.
  • Fixes bug causing some spurious data to be included in the link check results, when 'check linked js and css files' is switched on
  • Reduces some initial memory allocation - more memory efficient when scanning smaller sites.
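
The pattern support described above ('*' for any run of characters, '$' meaning 'at the end') is commonly implemented by translating each rule into an anchored regex. A hypothetical sketch, not Scrutiny's actual implementation:

```python
import re

def rule_to_regex(rule: str):
    """Translate a robots.txt-style rule into a compiled regex."""
    # escape everything, then restore the two supported metacharacters
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"   # '$' is only meaningful at the end
    return re.compile(pattern)         # re.match anchors at the start (prefix rule)
```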

v6.6.5

  • Fixes small problem causing javascript variables to appear in spell-check results. Obscure problem unlikely to have affected many.
  • Fixes unlikely bug causing occasional links to be listed with doubled-up characters, eg hhttttpp:://// etc
  • Fixes bug causing absolute urls to be constructed incorrectly under certain circumstances (if a page is redirected more than once, the first redirect url has a trailing slash but the second doesn't. Problem was possibly more likely to appear if 'render js' was switched on.)

v6.6.4

  • Fixes bug with detecting and handling http meta refresh
  • Adds 30s timeout to ftp of sitemap xml
  • Adds 'only include each image once' option to xml sitemap

v6.6.3

  • handles a problem where unexpected things are present in the base href. In unlikely cases this could lead to a crash, now handled gracefully
  • small change to authentication where form uses the security token system (better handling of passwords with unusual characters)

v6.6.2

  • Fixes problem with script variables appearing in spell-check results

v6.6.1

  • After viewing rogue http results (links to an http site when starting at an https site) via the dialog, if the user uses 'Previous' to go to the tasks list and then 'Go' to view the links results without scanning again, the filter is now reset to 'All'.

v6.6

  • Adds option to check linked files (linked external stylesheets and javascript files etc) while scanning
  • Fixes a number of problems related to filtering / sorting of your list of sites, including confusion of two configs with the same starting url

v6.5

  • When scanning secure (https) website, can alert user to links which take the user / bot onto the http version of the same site
  • Adds preference to show alert for the above when the scan finishes and offer to show details (Preferences > Links > When scanning secure (https) website)
  • Adds preference to consider links to the http version of the site to be 'external' (ie don't follow & don't include in sitemap)
  • Both new preferences are switched on by default

v6.4

  • Important enhancement to authentication functionality. Can successfully authenticate where login form uses a security token
  • Checkbox added to advanced settings to allow this feature to be switched on or off. If authentication is still unsuccessful after filling in the username, password and the names of those fields, try switching on this feature

v6.3.1

  • Fixes problem since v6 with finding link urls within image maps

v6.3

  • Improves 'archive' functionality:
    • adds option: don't show save dialog each time (remember and use the same location each time)
    • also adds option to process the archived pages rather than just dumping the html as before - process links / images etc within the archive, and recreate the directory structure so that pages display properly in a browser and can be browsed (a la sitesucker)
  • These options are available in the save dialog after the crawl finishes, or from an 'options' button beside the 'Archive pages while crawling' setting

v6.2.2

  • Adds 'ignore' to the options for blacklist rules, eg "ignore urls containing..."
  • Adds option (per site) to follow form actions. Note that Scrutiny will not send a post request to the form action and it will not send any fields, so the page reached may be a validation error, but it will serve to highlight the submission url being invalid (eg 404)
  • when using authentication and sending a password to a web form which only requires a password (no username) the form is now submitted with only those two boxes completed (the password and the name of the p/w field). Previously it would only send if the username and u/n field name boxes were filled in.
  • Note that to trigger a POST request to your starting url, at least the password field and password field name must have something in them. To send a POST to your starting url without sending a password, just use those two boxes to send your first field name and value, then the 3 custom POST form fields to send 3 more. Use the username field to send a fifth if you like.
  • small improvements to engine for html5 pages

v6.2.1

  • adds 'recheck this url' to context menu in a number of links views - those views now allow multiple selection. This allows a mass recheck of certain urls (eg if you have a number that timed out) or a 'mark as fixed', again for a multiple selection.
  • fixes a silly bug causing some unexpected things to start happening with the site selection following an error while saving (internal note, this happens because for reasons uncertain, a config file can incorrectly get saved with the filename 'untitled'. This messes with the config manager. If this has happened and isn't immediately fixed when running 6.2.1+ then the fix is to quit Scrutiny, find the 'untitled' config file and rename it to anything, then restart Scrutiny.)
  • small efficiency / speed improvement

v6.2

  • adds 'live view' by popular request. Allows results to be viewed while the scan is happening, either to spot any problems early (ie with settings or maybe timeouts etc) or just for visual feedback or because it's fun to watch the action.
  • Live view eats into the speed of the scan and system resources. For small sites this is insignificant, but for larger sites, live view isn't recommended. A message to that effect is shown once per session when triggering live view.

v6.1.5

  • another fix to words within javascript appearing in the spell-check results
  • fixes possible failure when exporting sitemap data as csv
  • Exporting is now disabled when the app is running in trial mode

v6.1.4

  • fixes a couple of small bugs in the parser which were causing spurious words from the code to occasionally appear in the spell-check results (notably where an img tag has alt="" but other places too.)
  • If user has been viewing images without alt text in SEO results, and switches back to the page tab, the filter button is returned to 'all' to avoid a blank table being shown

v6.1.3

  • fixes problem relating to images with links, which may have caused some pages not to appear in the SEO results table or sitemap.

v6.1.2

  • fixes problem that may have been observed with spurious results in the robots column of the SEO table
  • now correctly handles links with href "./"

v6.1.1

  • Fixes problem with 'don't follow nofollow links' setting, was causing incomplete crawl
  • fixes problem that was preventing some meta description tags from being reported

v6.1

  • fixes a problem preventing a site from being crawled properly if the starting url redirects to a different domain / subdomain.
  • improvements to engine, fixes some instability in 6.0.x
  • fixes a problem causing missed links in html5 pages, specifically those that use certain html5 tags
  • fixes a problem causing links in a text list to not be followed
  • If the starting url is ambiguous (could be a page or a directory), adds a dialog to check this with the user, as it affects the scope of the scan (which remains within the starting directory)

v6.0.8

  • Fixes bug causing commented out titles etc in head to be reported
  • Fixes display of headings in page inspector (double-click item in SEO table to display). Now displays headings as outline
  • Fixes problem that could cause hang when crawl finishes and SEO results open
  • Improves reporting of headings - trims whitespace / returns from the heading text

v6.0.7 (no longer beta)

  • Better handles extremely large html files (multi-Mb)
  • Fixes recent problem causing Scrutiny to miss description meta tags

v6.0.6 (no longer beta)

Improvements inherited from the v6 engine:
  • improves checks and balances where large files are returned without content-type or content-length headers. Attempting to parse these for html could break internal limits and cause the scan to crash.

v6.0.5 (no longer beta)

Improvements inherited from the v6 engine:
  • adds ability to start from an xml sitemap or plain text list of urls (absolute urls; the scheme, eg http://, must be present) (automatically detected). The feature existed in previous versions of Integrity and Scrutiny, but had not yet been implemented in the beta
  • ignores html tags in meta description

v6.0.4 (beta)

  • Fixes problem (in v6 beta, not the stable version) with Integrity finding frame urls within a frameset

v6.0.3 (beta)

  • Fixes possible hang when opening spell-check dialog for certain pages
  • Fixes a more general problem of Scrutiny not always handling urls properly during certain operations if they contain spaces or other special characters that need percent-escaping
  • Adds 'export' button above sitemap visualisation allowing exporting of the image as png or pdf
  • Tweaks to SiteViz, better at keeping the chart within the edges of the view and scrolls the scrollview to keep the chart within the viewport when the chart is zoomed or altered
  • Updates links to manual (context and Help menu) to an updated manual for v6
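
The percent-escaping fix above concerns urls containing spaces and other special characters. The idea can be sketched with the standard library; the `safe_url_path` helper and its `safe` set are assumptions for illustration, not Scrutiny's code.

```python
from urllib.parse import quote

def safe_url_path(path):
    """Percent-escape characters like spaces that need escaping in a
    url path, while leaving '/' and common url punctuation (and any
    existing %-escapes) alone. Illustrative sketch only."""
    return quote(path, safe="/:?=&%")
```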

v6.0.2 (beta)

  • adds context menu to flat links view
  • adds double-click to open link inspector to by status view
  • adds 'mark as fixed' in the context menu for the By link view, and button for the same to the link inspector
  • adds new feature to trace redirects. Link inspector shows a button beside the 'redirect url' field. The button shows the number of redirects (usually 1 but can be more). Pressing that button triggers a new request and a trace of the redirects as they happen
  • adds preference to not report redirects at all, just the final status. (Checkbox in Preferences > Links)
  • adds 'export summary report' to File menu (previously only available after scanning with actions or a scheduled scan)
  • adds some other export options to File menu (previously only available using the export button above the relevant table)
  • fixes bug with export summary report as HTML; the report was not saved if it was to replace an existing file

v6.0.1 (beta)

  • fixes bug in v6 causing hang if autosave is switched on

v6.0 (beta)

  • Fitted with the 'v6 engine' - faster and more efficient crawling
  • Incorporates Robotize (for viewing sites as a robot sees them - text-only, images as alt text, linearised, headings as an outline etc)
  • Displays visualisations directly (see new tab within Sitemap results) (previously had to export as .dot and open in a graphing app)

v5.9.21

  • Fixes an issue some users experienced with v5.9.20 failing to start

v5.9.20 released Nov 2015

  • Reference to HTML validation removed from tasks screen (feature has been unsupported for a while). This allows the summary report to be generated and the scan won't unnecessarily include the lengthy validation crawl
  • Users with a local instance of the w3c validator (pre nu engine) who find that Scrutiny was successfully using it should continue to use v5.9.18
  • Adds a 'HTML Validation' menu option to the context menu of the SEO table so that validation can still be tested on a 'per page' basis

v5.9.18 released Oct 2015

  • fixes bug which prevented crawl from continuing if started at a frameset

v5.9.17 released Oct 2015

  • fixes bug that could cause scan to 'stall' sometimes if running on minimum threads and pdf feature switched on

v5.9.16 released Oct 2015

  • fixes bug causing old pages with certain specific characteristics to be 'status checked' but not parsed for links resulting in not all pages being scanned. (Unlikely to have affected many sites)

v5.9.15 released Oct 2015

  • fixes bug causing urls with bad status (4xx or 5xx) to appear in xml sitemap if labels are switched off in Prefs

v5.9.14 released Sep 2015

Improvements to page search:

  • adds option to search entire source (as before) or just visible text
  • fixes problem preventing pages which are robots=noindex from appearing in the search results
  • adds info above sitemap results table, shows pages excluded for any reason
  • adds info to SEO warnings, shows number of pages containing robots noindex

v5.9.13 released Sep 2015

Improvements around xml sitemap

  • Adds option to include pdf pages in xml sitemap
  • Alters sitemap results table so that it shows the priority that will appear in the sitemap rather than the distance from the homepage
  • Priority is editable within Integrity's results table
  • Small update to the generated xml code to fix a validation warning from Google when images are included in the sitemap

v5.9.12 released Sep 2015

  • Includes fix for crash experienced where the site includes link(s) to docs.google or drive.google and Integrity is being run on OSX Yosemite. Crash will have started happening since the beginning of September 2015 and will have happened at a consistent point in the scan.

v5.9.11 released July 2015

  • Fixes bug responsible for crash after Pausing a large site or possibly on completion
  • Improves the speed of filtering and searching the flat view, unresponsiveness (spinning beach ball) after filtering or searching was a problem with very large sites, particularly when clicking some of the options in the export dialog
  • Fixes a problem with the colouring (orange or red) in the flat view - the wrong rows could be coloured after certain selections with the filter or search
  • Fixes a problem with the link inspector showing the wrong link from the flat view if the list has been filtered

v5.9.10 released July 2015

  • Fixes inaccuracy in html code of html exports
  • Adds button to reverse the text search, ie Scrutiny can now display pages that *don't* contain given text or code
  • Adds QuickView for images in SEO search results, images table. Double-click or pop-up context menu
  • Improves support for 'meta http refresh' type redirects
  • Improves soft 404 check - better at finding the target text in external pages and pages that are redirected by meta refresh
  • Fixes bug causing the count for bad links to increase when you re-check a bad link and it's still bad
  • Fixes some problems with ftp'ing of sitemap file after a scheduled scan - has been failing under certain circumstances

v5.9.8 released July 2015

  • Implements sorting in By page view
  • Fixes problems experienced in some locales after sorting the sites list by 'last checked' date

v5.9.7 released July 2015

  • Adds 'By status' view
  • the above allows sorting by any column and has a context menu to copy the link url, redir url or 'appears on' url
  • has an option button allowing you to group redirects by initial status, final status or the combination

v5.9.6 released June 2015

Some fixes and improvements to xml sitemap:

  • Allows editing of change frequency within the results table. Changes made are 'remembered' for future scans of the same site
  • Adds 'match' column to sitemap rules table (partial match or match whole string)

Other fixes / enhancements

  • better handling of base href. Now handles relative base hrefs and 'relative to root' ("/") properly
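
The base href handling described above (relative base hrefs, and 'relative to root' "/") amounts to resolving the base against the page url first, then resolving each link against that effective base. A minimal sketch using the standard library, not Scrutiny's actual code; the function name is an assumption.

```python
from urllib.parse import urljoin

def resolve_link(page_url, base_href, link):
    """Resolve a link as described in v5.9.6: the base href itself may
    be relative, so resolve it against the page url first, then resolve
    the link against that effective base. Illustrative sketch only."""
    effective_base = urljoin(page_url, base_href) if base_href else page_url
    return urljoin(effective_base, link)
```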

(v5.9.5)

  • Adds support for redirects by meta http refresh
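
A meta http refresh redirect carries its target in the tag's content attribute, eg content="5; url=https://example.com/new". Parsing it can be sketched as below; this is an illustrative sketch of the general technique, not Scrutiny's parser, and the function name is an assumption.

```python
def parse_meta_refresh(content):
    """Split a meta refresh value like '5; url=/new' into (seconds, url).
    Returns None if the delay isn't numeric; url is None when absent.
    Sketch only - real-world values vary in spacing, case and quoting."""
    head, _, tail = content.partition(";")
    if not head.strip().isdigit():
        return None
    url = None
    tail = tail.strip()
    if tail.lower().startswith("url"):
        # drop 'url', the '=', and any surrounding quotes
        url = tail[3:].lstrip().lstrip("=").strip().strip("'\"")
    return int(head), url
```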

v5.9.3 released June 2015

  • Expands summary text for SEO (was just numbers for pages without title or meta description). Now more comprehensive, listing counts for a number of SEO tests including images without alt text and thin content
  • Parameters for these tests are available as before under Preferences > SEO and also via a new 'Preferences' button beside the new summary text
  • A new button beside the summary text allows it to be copied to the clipboard. The text is selectable in case you only want to copy part of it
  • This additional information is included in the Summary Report and the email report (both of which are available via 'scan with actions' from the 'What do you want to do' screen). Note that the SEO table in the summary report only lists pages without title or description as before

v5.9.2 released June 2015

Some changes designed to help crawl very, very large websites:

  • The 'don't check external links' option now prevents Scrutiny from listing external links at all, thus reducing the data stored
  • A new option in Preferences to limit occurrences. For each occurrence of each link, a number of strings are stored (url of the page it appears on, link text and more) so using this option with a small number (minimum of one occurrence) will again reduce the amount of data stored

Other small enhancements:

  • Auto update improved, gives more feedback
  • Improves performance when 'soft 404' check is turned on. (better recognition of non-html files)

Fixes:

  • Fixes little issue with export preview, when exporting by link or by page, a second file was saved after switching back to flat view
  • Fixes bug with 're-check this link' - after using this option, only the final status code was shown, no redirect
  • 'Missing link url' links are now listed as internal, not external

v5.9.1 released June 2015

  • Fixes bug causing unexpected results if hreflang appears in links ahead of the href
  • couple small bugs fixed related to 'soft 404' check. If switched on with images switched on too, large amounts of messages could be written to the Console. If switched on, could cause hang at completion of crawl (which could be overcome by pressing 'pause' and then 'continue')
  • efficiency - large files of unknown mime type are assumed not to be html and not downloaded

v5.9 released June 2015

Many improvements related to sitemaps and in particular the .dot (graph) export (in readiness for SiteViz, a visualiser to display the sitemap):
  • Changes colours inserted into exported .dot file (which denote level) The previous colours, red, orange, yellow, grey, corresponded with the colours used within Scrutiny to denote warnings or errors, so were causing confusion. New colours in dot file are black, brown, yellow, grey
  • Adds option to override the canonical rule for the purposes of the dot file (for the sitemap results and xml sitemap, pages are not included if they have rel=canonical pointing to a different page. It may be helpful to see these pages in the .dot (visualisation) sitemap)
  • If crawl is limited by levels or number of links, changes logic slightly so that links which are only checked but not followed, are now checked for inclusion in the sitemap
  • Improves dot export, much quicker export
  • default option for exporting visualisation is a 'cleaned up' file (which doesn't include reciprocal and 'upward' links)
  • options in the .dot (visualisation) sitemap export dialog are remembered
  • Fixes bug preventing certain pages from being included in the sitemap
  • Fixes bug causing some occasional and incorrect colouring of entries in Scrutiny's sitemap results
Enhancements:
  • Spell checker now checks image alt text and page title
  • Keyword stuffing check includes image alt text and page title
  • Soft 404 function looks for terms in text content, not in whole page source
  • Efficiency improvements
Fixes:
  • Fixes bug - when data for a site is loaded in, the 'By page' view of Links still contained some pages from the previous data

v5.8.8 released May 2015

  • adds support for img srcset - all image urls are found, checked and reported
  • img alt text is now found if it appears in the tag before the src or srcset
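
Finding all image urls in a srcset, as the v5.8.8 change above does, means splitting the attribute value into comma-separated candidates, each a url optionally followed by a width or density descriptor. A simplified sketch (it would mis-split urls that themselves contain commas, such as data uris); the function name is an assumption.

```python
def srcset_urls(srcset):
    """Pull the candidate image urls out of a srcset attribute value.
    Each comma-separated candidate is 'url [descriptor]', eg
    'small.jpg 480w, large.jpg 1080w'. Illustrative sketch only."""
    urls = []
    for candidate in srcset.split(","):
        parts = candidate.strip().split()
        if parts:
            urls.append(parts[0])  # first token is the url
    return urls
```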

v5.8.7 released May 2015

  • adds dialog with a preview when user is exporting links to csv or html. Default is the flat view but all options are made clear in the dialog.
  • makes export from SEO table (SEO or HTML validation) much more efficient and faster
  • when reloading saved data, correctly removes previous data from the SEO and sitemap tables
  • Fixes problem with scanning with actions - validation wasn't being kicked off properly, leading to a hang when the scan finishes
  • Summary report now contains pages with missing title *or* missing description (previously only listed pages with both missing)

v5.8.6 released May 2015

  • Fixes bug which could cause a hang or crash if user chooses to 'check links within pdfs' and 'spelling and grammar' at the same time.

v5.8.5 released May 2015

  • Adds 'images' tab to SEO results (showing all occurrences of all images with alt text and host page) and 'images with no alt text' filter option to SEO table. This information has been available in the links table but hasn't been easy to find by those interested in the information for SEO reasons.
  • The new images table can also display a count of a keyword/phrase occurring in the alt text. (type keyword into search box as per the SEO table)
  • Fixes problem with window title when results are displayed
  • Now correctly unescapes entities in the canonical url

v5.8.4 released April 2015

  • Adds 'Thin content' filter to SEO results (a Panda factor). Based on word count for page, default is 250 words but this can be raised/lowered in Preferences > SEO

v5.8.3 released April 2015

  • Now link checks the canonical meta tag. (If canonical meta tag is present on the page being crawled and canonical column is switched on in Preferences > SEO)

v5.8.2 released April 2015

  • Fixes bug which was causing crawl to loop ad infinitum if #! appeared in the url
  • Fixes some instability (crashes) when using running js (rendering pages) option
  • A few small fixes relating to running js (rendering pages) before scanning. In particular, catches pages timing out under these settings, previously the crawl could apparently reach 100% but fail to display results

v5.8 released Feb 2015

Improves spelling results / workflow:
  • Adds 'by word' outline view, which lists possible misspellings, expandable to show each page that contains the word
  • Adds bulk 'learn' feature - select one or more words (hold down cmd to select multiple non-contiguous words) and press button 'Learn all selected'
  • The new view has context menus (right-click or ctrl-click) with options such as Learn, Copy url, Visit, Open spelling dialogue. Urls in this table can also be double-clicked to open the spelling dialogue to view occurrences of that word on that page in context.
  • The new table can be expanded and exported (after learning any false positives) for a csv which will be useful to clients.
  • Adds automatic update check (contains single-click download)

v5.7 released Jan 2015

  • Adds keyword density feature. Highlights pages which have any keyword appearing above a threshold set in Preferences. Double-click to see a full page analysis including one, two, three and four-word terms.
  • Adds 'Pages with duplicate descriptions' to SEO filter
  • Handles images where src = "data:...."
  • Fixes bug which was preventing some pages from appearing in the Sitemap / SEO tables if the links to that page are around images rather than text (a recent bug, not sure which version introduced it)
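
The keyword density highlight above flags pages where a term's frequency exceeds a threshold. A single-word sketch of the idea (the real feature also analyses two, three and four-word terms); the function name and default threshold are assumptions.

```python
def keyword_density(text, threshold=0.03):
    """Return words whose share of the page's word count exceeds the
    threshold - a simplified single-word version of the density check
    described in v5.7. Illustrative sketch only."""
    words = text.lower().split()
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return {w: c / len(words) for w, c in counts.items()
            if c / len(words) > threshold}
```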

v5.6.4 released Jan 2015

  • Adds 'Pages with title too long' to SEO filter and a setting for the max number of characters to Preferences > SEO
  • Makes v5.6.x the main release (has been release candidate)

v5.6.3 Release Candidate released Dec 2014

  • Adds 'column' button to Links and SEO results tables
  • Adds 'Redirected' to Filter drop-down of links results window

v5.6.2 Release Candidate released Dec 2014

  • Alters csv exports slightly, row separators are now LF character (Unix-style) rather than CR, for easier parsing
  • Improves orphan pages search; handles large remote directory listings more robustly and retries connections that time out
  • Adds Autosave option, checkbox in preferences. Data is autosaved when crawl finishes and on exit. If autosave data exists, it's reloaded on startup.
  • Improves feedback to user via progress bar during orphan pages search or autosave

v5.6.1 Release Candidate released Dec 2014

  • Fixes bug which was preventing information about some images from appearing in the SEO table. Image alt text is now marked [img alt] rather than [img src] for clarity.

v5.6 Release Candidate released Dec 2014

  • Adds orphan pages check. This will only work for static sites where the server can be accessed by ftp/ftps. Scrutiny compares the files on the server with the urls obtained by crawling.
  • Improves parsing for spell checking, words separated by html tags and no whitespace are now separated for spellchecking.
  • Spell-checking better handles certain html entities which might legitimately appear in words; apostrophes and dashes.
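
The orphan pages check above compares the files found on the server (via ftp/ftps) with the urls reached by crawling; anything on the server that was never linked to is an orphan. In essence a set difference, sketched below; the function name, the path representation and the `site_root` prefix are assumptions for illustration.

```python
def orphan_pages(server_paths, crawled_urls, site_root):
    """Return server file paths that no crawled url corresponds to -
    the orphan check described in v5.6. server_paths are paths relative
    to the site root; crawled_urls are absolute. Illustrative sketch."""
    crawled_paths = {u[len(site_root):] for u in crawled_urls
                     if u.startswith(site_root)}
    return sorted(set(server_paths) - crawled_paths)
```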

v5.5.2 released Dec 2014

  • Fixes problem with character encoding Latin1 (charset = "ISO-8859-1") and adds support for ISO-8859-2 (the Latin 1 problem could have caused some 'unsupported url' errors on pages which specify charset = "ISO-8859-1")
  • Fixes potential problem with parsing headings in SEO check. This could cause extraneous information to be reported.
  • Fixes potential problem with csv export of SEO table where description or headings contain double-quotes.
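
The csv double-quote problem fixed above is the classic case where a field containing `"` must be quoted with its internal quotes doubled. Delegating to a csv library handles this automatically; a sketch below (the helper name is an assumption, and the LF row separator matches the v5.6.2 change).

```python
import csv
import io

def export_rows(rows):
    """Serialise rows to csv text. The csv module quotes any field
    containing the delimiter or a double-quote, doubling embedded
    quotes - avoiding the export bug described in v5.5.2. LF row
    separators as per v5.6.2. Illustrative sketch only."""
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    writer.writerows(rows)
    return buf.getvalue()
```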

v5.5.1 released Nov 2014

  • Adds new column to SEO table, Robots (robots meta tag). This displays noindex, nofollow as appropriate and as with other columns can be sorted. Column can be switched on/off in Preferences>SEO
  • Fixes bug preventing pages containing robots:noindex from being included in the SEO table. Such pages are excluded from the sitemap (if the checkbox in Preferences>Sitemap is checked) which is correct. But in that case they were also excluded from the SEO table which was not correct.

v5.5 released Nov 2014

  • Adds fields in Advanced Settings for field names and values to be added to the POST request when authenticating. This is necessary for sites which use authentication by web form and the form has hidden fields which are required for the authentication
  • Adds checkbox to Settings screen 'Wordpress or other SEO-friendly urls'. This needs to be checked when a url is in the form mysite.com/publications/all-publications/ where all-publications is a page not a directory. Without the checkbox checked, Scrutiny would regard /all-publications as a directory and limit its crawl to urls within and below that 'directory'.

v5.4.8 released Nov 2014

New features and enhancements relating to authentication and crawl limits
  • Fixes bug which prevented Scrutiny finding all urls within some xml files
  • Fixes crash experienced by some new users

v5.4.7 released Nov 2014

NB Some users experienced a crash while using this version. Please upgrade to 5.4.8

A number of enhancements relating to character encoding:
  • More character encodings added to the list of supported encodings. Adds Thai encodings (windows 874 and TIS-620), Japanese (Shift_JIS) and some Simplified Chinese (windows simplified chinese, HZ_GB_2312 and GB_2312-80)
  • Reads the 'charset' attribute of every page (previously a detection was performed on the first page and the encoding used for the whole site)
Other enhancements and fixes:
  • Adds selection button beside the User Agent String field, populated with a few common browsers
  • Fixes problem with link count in SEO table. If images were being checked, images were incorrectly being included in the link count
  • Fixes bug introduced in 5.4.5 which may prevent scanning from working properly on 10.6 and 10.7
  • Alters text on settings screen: 'Render...' rather than 'Run javascript...' but functionality unchanged
  • Adds option in Preferences to show / hide the thumbnail images in the sites table

v5.4.6 released Oct 2014

A number of enhancements and fixes to spelling and grammar checking:
  • Fixes random crash with larger websites when using the learn button
  • Fixes a problem with the export of the spelling / grammar results to csv or html
  • adds Preferences > Spelling > Remove pages from the list when reviewed - removes a page from the spelling list when you close the dialog (even if there are still spelling / grammar issues being flagged)
  • better handling of entities within page text (eg é)
  • detects web addresses (eg within link text) and doesn't present them as spelling errors

v5.4.5 released Oct 2014

  • Fixes problem with spell checking (separating text content from html) which was causing some page content to not be checked on certain pages
  • When a new site is created, language is set to user's preferred spell-checking language (previously unpredictable or the last-used spell-checking language)
  • Fixes bug which could lead to a 'hang' at the end of a scan and before results are displayed in rare cases
  • Adds a trap for the very unlikely scenario of a very large binary file being served up with incorrect mime type of "text/html" and prevents Scrutiny from trying to parse it for spelling / grammar / word count etc

v5.4.3 released Oct 2014

  • Improves the scheduling system: fixes duplication of a site in the sites list when a schedule is triggered, and the schedule now uses settings saved in Scrutiny rather than, as before, settings saved in the schedule file
  • Fixes bug causing SEO table not to appear in the Summary report (Note that the summary report only lists pages needing attention and the SEO table will only show pages with missing title or description tags. To generate the full SEO table following a scheduled scan or scan with actions, tick 'save SEO table as CSV')
  • Fixes bug which was causing spell-checking to (very occasionally) check in a language different from the selected language
  • Fixes problems editing the url of a website if it's edited at the top of the settings screen or the task screen
  • Fixes sitemap ftp dialogue appearing in mid-air if Scrutiny is in single-window mode

v5.4.2 released Oct 2014

  • Fixes bug which could cause automatically-exported file to be empty of data with some websites
  • Fixes bug which was affecting the accuracy of the link count on a page (SEO table and page inspector) and could cause it to incorrectly display zero for some urls

v5.4.1 released Sep 2014

  • Further improves handling of html entities - all known named and numbered entities handled, ascii and utf
  • Adds View > Partial results to View menu. Available after pausing crawl, displays partial links results for diagnostic purposes
  • Fixes bug that could cause external pages with querystrings to be duplicated in the links list when 'ignore querystrings' is checked
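
Handling "all known named and numbered entities ... ascii and utf", as the v5.4.1 change above describes, corresponds to a full html entity unescape. In Python the standard library already covers the complete set; a sketch (the wrapper name is an assumption, and this is not Scrutiny's code):

```python
import html

def clean_text(raw):
    """Decode all named and numbered html entities (ascii and utf),
    analogous to the v5.4.1 improvement. html.unescape knows the full
    HTML5 named-entity table plus decimal and hex character references."""
    return html.unescape(raw)
```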

v5.4 released Sep 2014

  • Fixes bug which was preventing certain pages from being included in the Sitemap
  • Fixes bug which could cause hang before scanning for users in certain locations, related to system language and available spell-checking languages
  • Fixes problem with context menu in sitemap table and 'search pages' results. A context menu is available containing 'Copy url' and 'Visit page', along with cmd-C to copy the url of the selected item
  • Although not recommended in urls, support added for certain entities in the html such as ' (some named entities such as ' were being handled previously)
  • Switches to Paddle framework for licensing (existing keys should be picked up and recognised. For installations on new computers, the Paddle window won't accept the old-style key - contact support about a replacement key)

v5.3.1 released Aug 2014

  • Supports feed:// urls
  • Adds transparency / opacity to colour picker for highlighting (General>Display labels), allowing users not to see highlighting for redirects but just for 4xx and 5xx errors, for example.

v5.3 released Aug 2014

  • Adds ability to save and re-load data
  • NB The menu item 'Open...' (cmd-O) has long meant 'open a local copy of a website or list of links for testing'. To re-load saved data, use the menu item 'Load Data...' (cmd-L)
  • Fixes crash at launch for certain users since 5.2 possibly relating to missing languages
  • Sorts languages (for spell checking) into alphabetical order

v5.2.1 released Jul 2014

  • Fixes problems accessing urls within some framesets
  • Fixes problem with 'corrupt' Window menu. As a result full screen mode now behaves slightly differently if in 'many window' mode (Preferences > General > Results)
  • Resets progress bar and info fields before starting a new scan or re-scan, so user no longer sees a flash of the previous information

v5.2 released July 2014

  • New features:
    • Adds spelling and grammar checking
    • Adds customisable summary report (button existed previously but functionality missing)
    • Adds new tab and fields to Preferences window for creating header and styles for summary report
    • Adds last modified date to SEO table, optional column, on by default
  • Improvements and fixes:
    • Efficiency & speed improvements
    • After a scheduled scan, screen returns to Tasks ready for results to be viewed
    • Fixes page title not found if there were any other details in the title tag
    • Fixes problems with ftp'ing sitemap after scheduled scan
    • Corrects html issue with html links export

v5.1.2 released June 2014

  • When crawling html files locally, now adds filename if necessary. Default is 'index.html' but this can be set on a per-site basis via the Advanced settings window
  • Fixes bug that could cause Scrutiny to go into a loop when crawling a site locally & continue until reaching the preset maximum number of links
  • Now correctly records the 'redirect' url when scanning with 'Run javascript' switched on
  • Fixes hang or crash when sorting certain columns of page analysis table

v5.1.1 released June 2014

Enhancements

  • Uses shorter format for date stamp, easier to read, reduces column widths and file sizes of exports
  • External links no longer have querystring removed even if 'ignoring querystrings'. ie 'ignore querystrings' is only applied to internal links
  • Truncates the display version of urls (ellipsis in the middle) for HTML exports to avoid massively wide tables

Small fixes

  • Fixes bug causing hang when highlighting links including non-ascii characters
  • Fixes bug causing crash when highlighting links that include ../ where parent directory is unreachable

v5.1 released June 2014

Improvements and refinements to the UI
  • Displays results in main window, neater and more compatible with OSX full-screen mode
  • Option in Prefs to return to 5.0's 'many windows' behaviour (allows viewing of links / sitemap / SEO / Validation at the same time)
  • 'Previous and Next' navigation now operated by left and right arrow keys
  • Task list ("What do you want to do?") navigable by keyboard - up and down arrows tab through the tasks, enter selects the one that's highlighted
  • Adds signal beside tasks if they are 'Available without re-scanning'
  • Main window remembers its size and position
  • Websites monitoring list appears as a sheet of the main window rather than a separate window
New features and fixes
  • Adds 'Internal backlinks' column to the SEO table (switchable in Preferences > SEO)
  • Adds button to blacklist the crawl based on robots.txt (previously possible to exclude urls from sitemap based on robots.txt - that's still there as a separate option)
  • Fixes column sorting in validation table
  • Adds HTML export to validation table
  • Truncates the display version of urls (ellipsis in the middle) for HTML exports to avoid massively wide tables

v5.0.10

  • Fixes problems related to Scrutiny starting on schedule - the bug could cause Scrutiny to quit or hang when the schedule kicked in, particularly if Scrutiny was not running at the scheduled time and had to start up
  • Changes the trial policy to allow proper testing - instead of a fixed number of scans, now allows 15 days free and unrestricted use
  • Resets the trial period so that anyone can trial this version even if they've tried a previous version

v5.0.9

  • adds html export to SEO table. Useful as links are 'clickable' in this format
  • adds options for sitemap html export, flat or hierarchical (see Preferences > Sitemap). Default is the new hierarchical option
  • improves csv and html export, these now reflect the sorting / filtering of the table being exported
  • fixes bug in sitemap html export that was including pages which have a canonical pointing to another page

v5.0.8

  • Better handles urls with port numbers (problems experienced with some servers re urls with a port number and returning a redirect)
  • Now allows starting url which includes non-ascii characters (although not in the domain, IDN's still unsupported)
  • Fixes problem of urls containing non-ascii characters occasionally being displayed with percent escapes
  • Fixes bug causing empty or hash link urls to be reported all the time, regardless of whether the checkbox is checked
  • correctly handles links using ./ (same directory)
  • Fixes problem with adding new sites if list is sorted

v5.0.7

  • Fixes bug relating to page headings which could cause Scrutiny to hang with certain pages

v5.0.6 (beta) Released May 2014

  • Adds 'Action' button under 'Scan with Actions'
  • Correctly removes all temporary files when application quits. v4 and before had removed temporary files only when starting a new scan; previous point releases of v5 had not removed all files.

v5.0.5 (beta) Released May 2014

  • Adds sortable columns to sites list, can be sorted by name, url or last checked date
  • Better handles entities involving a hash (eg &#39;) within a url. Previously was truncating the url at the hash, assuming it to be a fragment/anchor
  • Fixes problem with non-ascii characters in img src
  • Fixes bug causing spurious text to be reported as the link text if an image has alt = ""
  • Adds character encoding tag to head of HTML exports
  • Handles html within a heading such as <strong> or <span> and reports the whole heading correctly
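The entity fix above (truncation at the '#' inside a numeric entity such as &#39;) can be sketched as follows - a minimal Python illustration, not Scrutiny's actual code: decode entities before splitting off the fragment.

```python
import html
from urllib.parse import urldefrag

def split_fragment(raw_href):
    # Decode entities first, so the '#' inside a numeric entity
    # such as &#39; isn't mistaken for a fragment marker.
    decoded = html.unescape(raw_href)
    url, fragment = urldefrag(decoded)
    return url, fragment
```

A naive split at the first '#' would truncate `page.html?q=it&#39;s` at the entity; decoding first leaves the url intact, while a genuine `#section` is still split off as a fragment.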

v5.0.4 (beta) Released May 2014

  • Fixes to non-ascii url support (introduced in 5.0.3). Note that this doesn't include non-ascii characters in domain name (IDN) which is as yet unsupported
  • Fixes problems finding headings where the heading tag contains a class <h1 class = "someclass">

v5.0.3 (beta) Released April 2014

  • Now supports urls which include non-ascii characters. Some may argue that this is against web standards, but it's becoming more common and accepted by Google and browsers
  • Fixes bug causing problems running on systems < 10.9

v5.0.2 (beta) Released April 2014

    [edit: there used to be a note here about Wix sites but that's now out of date. For the latest on Wix support, please contact support.]
  • When settings are changed, 'new scan' is automatically checked
  • Blacklisting / whitelisting is no longer applied to starting url. Previously, starting url had to pass black/whitelist test otherwise crawl wouldn't get past the first page
  • New option to include <lastmod> in xml sitemap. If this is checked, the last modified date for internal pages is logged (if the server gives it) and shown in the sitemap table

v5.0.1 (beta) Released April 2014

  • Auto-detects character encoding of pages, character encodings now supported include CP1251 (Cyrillic script eg Russian, Bulgarian, Serbian Cyrillic)
  • Adds headings and schedule column to sites table

v5 (beta) Released April 2014

v4.5.5 Released June 2014


(as part of ongoing support for v4 alongside development of v5)
  • Fixes and improvements:
    • Fixes column sorting in validation table
    • Improves page analysis, has improved UI and finds more page elements


v4.5.4 Released February 2014

  • Fixes and improvements:
    • Adds validation to sitemap ftp server field. Adds the ftp:// scheme if not present or replaces http:// with ftp:// (the reason for some support issues)
    • Reverses the sandboxing of v4.5.3 as there were some restrictions which couldn't be overcome in a sandboxed app


v4.5.3 Released February 2014

  • Fixes and improvements:
    • Handles 'callto:' links (skype: and tel: were already handled); no longer reports them as bad links
    • Changes 'Highlight' button on SEO table to 'Filter', which is more user-friendly
    • Adds 'Duplicate Titles' to SEO filter button
    • Adds licensee's name to the About box
    • Fixes problems with display of 'Last checked' information
    • Fixes problem occasionally experienced if the starting url (a web page or a text file) is incorrectly identified as a soft 404
    • Changes archive filename to year-month-day rather than day-month-year so that a folder of archives sorted by name will appear in chronological order
    • Fixes bug causing crash sometimes if Advanced settings window is closed using red button rather than OK button
    • Sandboxed and code-signed for your security


v4.5.2 Released December 2013

  • Fixes and improvements:
    • Recognises an xml sitemap file: File>Open and Scrutiny will test all of the links within it. (Previously able to test the links within a text file in plain text or html format)
    • Fixes problem which could lead to incorrect information in the 'occurrences' of a link where another url redirects to that url
    • The above fix will lead to slight differences in the results for some sites (a small increase in data). The new version should be more accurate
    • Integrity running on previous versions of OSX isn't tolerant of links which try to access a folder above the domain (eg foo.com/../somepage.html); due to changes in Mavericks, such links were reported as fine. From 4.5.2, Integrity traps such links and reports them as badly formed. Note that some developers consider such links fine because browsers generally tolerate them (the parent directory instruction is ignored), but they're technically incorrect and there are no plans for Scrutiny to have an option to tolerate them
    • Turns off some unnecessary console information and clears up some console warnings
    • Fixes problem of 'page analysis' testing wrong page if SEO table has been re-ordered by clicking column header
    • Fixes a bug which was causing some instability under certain circumstances and an occasional crash when clearing the results of one site and starting the crawl of another


v4.5.1 Released September 2013

Adds 'soft 404' support:
- highlights suspected soft 404s (where status code is 2xx but the intended page hasn't been found)
- You can customise this list to find soft 404s within your own site or add terms found in external soft 404s
- You can switch the feature off (in Preferences) if you have a large site and want best performance and this isn't important to you
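The detection described above can be sketched like this - a hedged illustration only (the phrase list here is invented for the example; Scrutiny's own list is the customisable one mentioned above):

```python
def looks_like_soft_404(status_code, body_text, phrases=None):
    # A 'soft 404': the server answers 2xx but the body says the
    # page wasn't found. Phrase list below is illustrative only.
    phrases = phrases or ["page not found", "no longer available"]
    if not 200 <= status_code < 300:
        return False  # real errors are already caught by the status code
    text = body_text.lower()
    return any(p in text for p in phrases)
```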

Adds automatic update check:
- New dialog gives information about new versions when available with single click to download

Small fixes and improvements:
- Adds 'visit' button beside url field in link inspector

Change of policy with demo mode
- Allows a limited number of trial scans rather than a period of time

v4.4.1 Released September 13

Small fixes and improvements:
- Expandable views will only expand when crawl is paused or finished. Deferring the building of these views improves speed and efficiency
- Fixes bug preventing pages from being added to the sitemap if canonical link is given as a relative url

v4.4 Released September 13

Retina screen compatible
OSX Mavericks tested and supported

Main window's Toolbar redesigned in line with Apple's human interface guidelines and for retina screen compatibility
Adds toolbar controls (show / hide / customise) to main View menu

Minimum system requirement is now OSX 10.5. 10.4 users should not upgrade to v4.4; for compatibility with newer systems it uses features not available in 10.4
Small fixes and improvements:
- now indents data for expandable views when exported as csv, html

v4.3.1 Released August 13

Small fixes and improvements:

- Fixes problem with og:description being displayed in SEO table rather than meta description (if they are different, depending on which comes first).
- Does not show pages in sitemap (& exported XML sitemap) if canonical link is present and points to a different page.
- Adds time and date to comments at top of XML sitemap

v4.3 Released July 13

Adds more highlighting options to SEO table:
- option to highlight pages with many links. Preference added so that you can choose the threshold, but default is set to one link per 100 words (can be changed in Preferences>SEO) - (this number comes from Matt Cutts of Google: http://www.mattcutts.com/blog/how-many-links-per-page )
- option to highlight too short / too long meta description. This is important because it is displayed on the search engine results page (SERP). Defaults treat between 30 and 160 characters as ok; can be changed in Preferences>SEO

Adds link count as column to SEO table, sortable.
Also adds word count to SEO table, sortable, allowing user to find pages with small or large amounts of content, and to compare number of links with number of words. (Some guides recommend a number of links relative to your content, eg one link per 125 words.) Currently Scrutiny doesn't display this ratio - the calculation must be made in a spreadsheet with exported data.
Also adds canonical url to SEO table (sortable) and takes this information into account when highlighting duplicates (ie two pages aren't marked as duplicates if one contains a canonical url)
Adds these three new fields (links, words, canonical url) to the Page inspector (double-click an item in the SEO table)
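The link-to-word comparison mentioned above has to be done outside Scrutiny with the exported data; a minimal sketch of that spreadsheet calculation (the one-per-100-words default comes from the note above; the function name is mine):

```python
def too_many_links(link_count, word_count, links_per_word=1/100):
    # Flags a page whose link density exceeds the threshold
    # (default: one link per 100 words, as in Preferences>SEO).
    if word_count == 0:
        return link_count > 0  # links but no content at all
    return link_count / word_count > links_per_word
```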

Adds checkboxes to switch columns on or off in SEO table (Preferences>SEO)
Adds context menu to SEO table which includes Copy URL (command-C from keyboard), Visit and Get Info (command-I from keyboard)
When the starting url is edited, user is asked whether they'd like to edit the url for the current website configuration or whether they're intending to create a new configuration

Fixes problem with response times getting inflated if validation is running
Fixes bug related to new 'by link' outline view causing a crash sometimes after switching to another site and starting a new crawl
Views with switchable columns (via Prefs) now remember how the user has resized and repositioned them
Now correctly resets column sorting on all views when starting new crawl
Fixes two small and unrelated bugs causing odd results if nofollow switched off and base href present but set to ""

v4.2.2 Released June 13

Fixes problem with new link text column in By page view not always displaying accurate data where same link occurs multiple times on same page
Tweaks how information is displayed in new expandable By link view
Fixes crash on launch for some existing users
Filter button now works in flat view

v4.2 Released June 13

improvements to interface:
- Changes the 'by link' view to an expandable view, occurrences can be seen by expanding view rather than as previously having to open the link inspector
- Link inspector still appears on double-click from link views and is improved
- Adds context menus to the 'by link' and 'by page' views and the 'appears on' table in the link inspector - a number of actions can be performed with a right-click (or control-click) including 'Copy URL' and where appropriate Visit, Highlight and Locate
- The new Copy URL action is available with a command-C and will copy the URL of the selected item
- A new Locate action lists how to click through from the starting url to find the link in question. It is available via context menus, the link inspector and cmd-shift-L
- Adds 'link text' column to 'by page' view
- Change to wording: 'on page' now 'appears on'
- Changes default for highlighting a link on the page - now looks like a highlighter pen rather than a box around it (changes prefs defaults to 'background' rather than 'border', and changes the default colour to yellow rather than dark grey - existing users can select this option in prefs if they like)

Ignores and continues if 'bad SSL certificate' warning is encountered. But only for the website being tested. (anything else, ie external links, won't be followed anyway)
If image checking is switched on, now collects alt text and displays in 'link text' columns
Some options removed from Preferences>Views>By Link view (Status, URL, On Page) because these are needed for the new outline view to work properly
Exporting from 'by link' view improved (previously put all 'on page' information in a single cell to reflect the view, which led to problems due to Excel's 256-character limit)
Export added to by Page view
Exports from expandable views reflect the state of the view, ie which rows are expanded or not

v4.1.3 (Released May 13)

Improves authentication: allows you to input field names for websites which require login details to be sent by web form (eg Wordpress sites)
Remembers last-used filename and directory when saving sitemap xml file - details are remembered for each of your sites
Ignores and continues if 'bad SSL certificate' warning is encountered. But only for the website being tested. (anything else, ie external links, won't be followed anyway)
If a link just has a hash as the url, a hash character is displayed rather than the word 'hash' to avoid confusion
Improves speed of csv exports

v4.1 (Released April 13)

Now able to search for duplicates (same page with different urls)
Checks whether links are 'nofollow', displays this information in the link tables (switchable as per other columns) and adds option to prefs to 'not follow nofollow links'
Also checks for robots meta tag and whether nofollow present. If so the new 'don't follow nofollow links' will also apply to links on that page
Adds selector allowing choice of highlighting in SEO table; missing SEO parameters (as before) , possible duplicates or pages marked as nofollow
Double-click in SEO table now opens a new inspector showing SEO information including a list of possible duplicates
Scrutiny will check for the nofollow attribute - which is an overhead - if either of these columns is showing (Preferences > Views), so that you can see which links are 'nofollow' even if you've chosen to follow them anyway. Hide both columns (the default global setting) if you don't need to know about this; the crawl then won't be slowed down
Fixes page analysis not working properly
Checking for blacklist or whitelist terms is now case-insensitive, as you would probably expect
If flagging blacklisted urls, then the highlight colour used is orange or the warning colour (was red or bad link colour). Not an error so inappropriate to use an error colour
No longer includes 404 pages in the sitemap
Fixes problem of apparent duplicates in sitemap and SEO tables caused by two different link urls redirecting to the same url
Fixes bug preventing total image weight being shown in SEO table
More context help buttons

v4.0.4 (Released February 13)

Fixes problems creating black/whitelist rules on first run with no settings saved
Correctly sets window to edited (dirty spot in red button) when black/whitelist rules are changed, triggering prompt to save when switching settings

v4.0.3 (Released February 13)

Small fixes

v4 (Release Candidate January 13)

Major improvements to the engine and data storage meaning that even small sites will crawl more quickly and large sites will crawl very much more quickly without slowing down or losing responsiveness
When stop button is pressed, all open threads are abandoned, and then recreated if 'continue' is pressed. Gives a much better user experience.
Blacklist and whitelist boxes replaced by a more user-friendly table of rules (existing data will be preserved and presented in the new way)
Adds 'By page' links view. If 'bad links only' are showing, the view will show a list of pages requiring attention, expanding to show the bad links on that page.
Routines for 'by page' view re-written to avoid apparent hanging at the end of the crawl of a big site
Adds new settings to Preferences, allowing limits to be set - defaulting to 200,000 links. Offers the option of limiting the crawl of a large site (maybe better achieved by using blacklist / whitelist rules) but also acts as a safety valve to prevent crashing due to running out of resources when crawling very large sites
If starting crawl within a directory, crawl is limited to that directory, ie crawl will go down a directory structure but not up. This matches users' expectations. Previously, crawl extended to all pages in the same domain.
Fixes inefficiencies in full report generation which were giving the impression of 'hanging' if full report generated for medium or large sites
Fixes problem with robots.txt if more than one user-agent is specified. Now will only use an exclusion list for user-agent = all (*) or Google (ie Scrutiny will respect the file as if it were Googlebot)
Moves 'check links on custom error pages' to settings rather than global preferences, and moves the 'labels' preferences to the View rather than General tab of the preferences window
Adds Help contents to help menu - links to manual index page
Increases maximum number of threads from 30 to 40 (will improve crawling for some sites) with the default now 12 rather than 7. Extreme left (labelled 'fewer') is still a single thread
Updated application icon
Resets the 30-day trial if you've used the trial with a previous version. There will be a price increase, but existing licences will work with v4 - a thank you to those who bought in early.
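The robots.txt behaviour above - using only the exclusion list for user-agent '*' or Google - might look like this in outline (a deliberately simplified parser for illustration, not Scrutiny's implementation):

```python
def googlebot_exclusions(robots_txt):
    # Collect Disallow rules only from groups whose User-agent is
    # '*' or begins with 'google' (crude: one agent line per group).
    rules, applies = [], False
    for line in robots_txt.splitlines():
        line = line.split('#', 1)[0].strip()  # strip comments
        if ':' not in line:
            continue
        field, value = (p.strip() for p in line.split(':', 1))
        if field.lower() == 'user-agent':
            applies = value == '*' or value.lower().startswith('google')
        elif field.lower() == 'disallow' and applies and value:
            rules.append(value)
    return rules
```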

v3.2.2 Build 2 (Released January 13)

Problems with certificate resolved, supplied as an installer package (10.4 users will need 3.2.2 Build 1)

v3.2.2 (Released December 2012)

Fixes a bug when crawling multiple sites sometimes preventing them all from being crawled properly
Fixes a bug preventing 'delay' setting from being saved properly depending on localisation settings eg if decimal separator is ','
Prevents some unnecessary save dialogue

v3.2 (Released November 2012)

Adds 'filter' button for selecting out internal links or images
Fixes problem where some servers don't like the 'referer' header field being sent with no value (ie for the starting url) and return 'bad request'
Corrects the Prefs window>Validation>suggested address for local instance of validator - adds slash at the end which seems to be necessary (clicking this address auto-completes the location field)
Fixes bugs with "Re-check bad links", one causing minor hang if used twice in a row.

v3.1 (Released October 2012)

Crawls multiple sites in one go - a list of links provided in plain text (with plain text mode switched on) is taken to be a list of sites, ie each is followed and crawled
(note that the total number of links / pages crawled must still be within the capacity of your computer to hold all the data, eg perhaps 100,000 links or 10,000 pages)
Page analysis tool now shows uncompressed and compressed size of files where gzip is being used by the server. So webmasters can easily see the benefit of their servers' gzip service and the actual 'transferred' weight.
'plain text mode' button fixed - state wasn't being saved with settings

v3.0.4

fixes bug causing instability on 10.4

v3.0.3

Small fixes including: fixes some relative urls being formed incorrectly
fixes double-click in links 'by link' view opening wrong item if search box has been used.

v3.0.2

Small fixes including: fixes hash trimming and trailing slash trimming (problem with the latter was leading to many apparent redirects)
clear and re-start only enabled when crawl is paused or finished
Fixes context help 'i' button for 'timeout' and 'delay'.

v3.0.1 (App Store) and v3.0 build 2 (web dist)

Fixes spelling mistake in prefs ('occurrences')
Fixes page status not displaying
Fixes links to new manual
Fixes ua string defaulting
Updates change freq - original values use rules, no need to push button
Fixes progress indicator in AS dist
Fixes re-checking stacking statuses
At this point the AS version has a hi-res document icon, the Web version a lo-res one

v3 (Released September 2012)

Able to schedule crawl via iCal with optional repeat. (Instructions added to manual.) CLI-minded people can use cron to do the same thing.
SEO keyword analysis searches content. This feature has to be switched on in preferences as it uses more of your computer's resources during the crawl
Adds sorting (by clicking on table headers) to all tables
Adds filtering and sorting to SEO view when a keyword or phrase is typed
Black / whitelists can apply to content as well as url (new checkbox below black / whitelist fields)
Blacklisted urls can be flagged (option in preferences)
Many internal changes making crawl slightly quicker and significantly more memory-efficient, larger sites can be crawled in one go
All statuses are shown for redirected links rather than just the final one
SEO table url column displays the final url for redirected urls rather than the original one
New options in prefs for:
- checking content for seo keywords (must be checked before crawling site)
- flagging blacklisted or whitelisted urls (remember that you can blacklist or whitelist keywords in the content now too)
password field in Advanced settings is now a secure text field - hides the password from view
Adds Clear and Re-start to File menu
Fixes follow whitelist box not being saved with settings in web distribution
Fixes total image weight calculation in main SEO table
Measures in place to limit problems caused by extraneous and invisible characters entering the url field with a copy and paste

v2.0.1 (Released July 2012)

Rename menu item is blocked if crawl is running, and switches to list view if not already showing (previously appeared not to work if icon view was showing)
fixes redirected urls (3xx) not being highlighted yellow
fixes update frequency not being carried through to the xml sitemap from the rules table
if a url is redirected, sitemap table shows url as redirected rather than original link url, and exported xml sitemap similarly gives redirected url rather than original link url
Manual and help menu improved

v2: (Released July 2012)

Main new features

New page analysis tool
- load a page and its elements (images, .js files and .css files) noting the response time and load time for each element.
- It will give you a total, and you will be able to see where any problems lie
- can be used as a standalone tool or opened to analyse the currently-selected page. Document-based so that you can have more than one test open at the same time.

Keyword analysis
to count the occurrences of a word or phrase in url, title, meta description, meta keywords and main headings. Simply type the word into the search field above the list.

Greater control and better prioritisation in XML Sitemap
if you choose the 'Automatic' setting for priority, Scrutiny will mark your starting url as 1.0 and then calculate the others based on the number of clicks from the home page, and use a logarithmic scale. ie one click from first page = 0.5, two = 0.3, three = 0.2 with all other pages = 0.2
Further to this, you can set up some 'rules' to specify priority and update frequency for certain pages or sections of your site. You only need to enter a partial url. This way it is possible to specify a particular url or a section of the site, eg "/engineering/"
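A sketch of the 'Automatic' priority scale described above (the mapping values are taken directly from the text; the function name is mine):

```python
def automatic_priority(clicks_from_home):
    # Starting url (0 clicks) gets 1.0; deeper pages fall away on
    # the scale given above, bottoming out at 0.2.
    scale = {0: 1.0, 1: 0.5, 2: 0.3, 3: 0.2}
    return scale.get(clicks_from_home, 0.2)
```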
New export options including html sitemap and a full report
Can export a full report containing summaries and full lists for links, SEO and validation. You can save this in pdf format or html format. The links are 'clickable' in both formats.
The link views ('by link' and flat view) in v2 have a search box which searches the url, redirect url, status and link text, all case insensitive
User interface improvements: starting url is visible from all tabs, along with the Go button which changes to pause / continue as appropriate, replacing the old toolbar pause / continue button; adds new tab for full report (prints from File>Print and 'print' toolbar item); adds alternating background to all views; tabs have a pull-down menu rather than separate buttons for the export options; closing the main window doesn't quit the app; Window menu allows re-opening the main window or page analysis windows.

If you are not a registered user, then the trial period is reset. So even if you've had 30 days' trial of v1, you can still try v2 for 30 days.

Other improvements and fixes

Splits the check for robots.txt and meta robots/noindex into two separate checkboxes as some users have wanted to use one but not the other
Scheme-relative links, eg //domain.com (see http://www.ietf.org/rfc/rfc3986.txt section 4.2), were previously handled ok but not when the page's base href was given in this format
Prevents switching to a new site while crawl is running, which was affecting the crawl in previous versions
Removes good colour from Preferences (to allow for stripey views)
Fixes a stability issue when the validator is crawling and there are a large number of unauthorised / redirected urls
Fixes 'new settings' not clearing 'last checked' status
Fixes a bug causing a bit of a hang if 'robots.txt' is set to be respected but the site is being crawled locally.
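Scheme-relative references such as //domain.com (mentioned in the fixes above) resolve by inheriting the referring page's scheme, which Python's standard resolver demonstrates (the urls here are made up for the example):

```python
from urllib.parse import urljoin

# A scheme-relative reference inherits the scheme of the base url
# (RFC 3986 section 4.2) - whether it appears in a link or a base href.
base = "https://mysite.example/index.html"
resolved = urljoin(base, "//domain.com/somepage.html")
print(resolved)  # https://domain.com/somepage.html
```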

Version 1.6.3

released June 2012

Adds support for telephone links such as tel: and skype: (now recognised and skipped rather than reported as an error)
Fixes bug relating to crawling local sites introduced in 3.8.4
Fixes problem with crawling local sites if they are stored in the root Library folder
Fixes bug causing special characters such as ü, ö, ä in page title or link text to be altered to u, o, a when exported. All exports (.dot, .csv, .tdl, .html) now use utf-8 character encoding. Note that in line with web standards (RFC 1738) Integrity and Scrutiny don't support non-ascii characters in urls
Fixes bug causing 'Recheck broken links' to give strange results or appear to hang if validation is set to crawl all pages

Version 1.6.2

released May 2012

Traps and highlights a certain kind of recursion
A fix and improvements re secure (https:) links. The problem could cause hanging or crashes in certain circumstances
Fixes problem with thread counting, faster crawling

Version 1.6.1

released May 2012

Adds 'Response time' as a column to the SEO table
Fixes bug affecting checking of broken images where image has src = "" and improved handling of empty quotes if that option is switched on
Fixes spurious text appearing in 'Link text' for links on images where the images alt = '' (empty string)
Fixes bug preventing proper construction of urls where base href = "/"
Improves submission of username and password for sites requiring authentication
Fixes problem of crawl or 'recheck broken links' not always finishing properly
Fixes potential crash under certain circumstances (involving redirect, url having trailing slash and settings set to ignore trailing slashes)

Version 1.6

released April 2012

Adds check for robots.txt and noindex in the meta data (this feature is off by default, switched on in preferences). When crawling, all links are followed and checked regardless, but if a page is marked as 'noindex' in the robots meta tag or disallowed in the robots.txt file, it will not be included in the sitemap, SEO or validation checks. robots.txt must have a lowercase filename and be constructed as shown at http://www.robotstxt.org/robotstxt.html
Indicates progress via application icon in dock
Adds image count to SEO table, shows number and weight of images on page (only those linked from html, not those linked from css). For this feature to work, 'check for broken images' must be checked in settings. (I believe that Google takes load time into account while Bing does not.)
Adds totals for 'no description' and 'no title' to SEO tab
Default link check timeout shortened to 30s
Fixes bug preventing images from being found if 'src' doesn't follow 'img' in the html
Fixes bug causing broken images to spuriously appear in Sitemap and other tables
Fixes bug causing number of html validation errors to sometimes incorrectly show as 0
Two versions now maintained, one built for distribution via web (10.4 - 10.7 supported) and one certified and built for distribution via App Store (10.5 to 10.latest supported). The latter will have a .1 at the end of the version number in the About box, eg 1.6.0.1 is the App Store version.
Note that if you download and buy via the web it is not possible to upgrade via the App Store and vice versa
App Store version has Lion features such as full-screen mode

Version 1.5

released April 2012

Removes 'generating flat view' progress bar. This job is now done much more quickly and in the background
Adds columns to validator tab, number of errors and number of warnings
Adds 'Export as CSV' button to toolbar
Adds ability to export sitemap and validator results as csv, from menu, toolbar or button on relevant tab
Fixes issue where a comma or trailing comma in the blacklist fields prevented a proper crawl
Adds switch in preferences to trim leading or trailing spaces and mismatched quotes from a url
When crawling locally, fixes 'file is directory' being included in bad links
Some fixes to 'Re-check bad links' (it sometimes caused a crash since the last release)
The 'highlight link on page' feature is now switchable between highlighting the link and simply visiting the page
The validator now lists all pages but only checks the starting page. Checking the whole list, as before, can be switched on in preferences, but note that the public validator will only check a certain number of pages in succession, even with the 1-second delay that they ask for; this is the reason for the change. The Integrity and Scrutiny FAQs page gives details of installing the w3c validator locally, which should allow full and rapid checking.
Fixes problem of throbber sometimes continuing to turn when crawl or re-check has finished

Version 1.4

released March 2012

Adds username and password fields to advanced settings window. If using authentication, Scrutiny will attempt to send these credentials if challenged by the server. If details are sent and then rejected by the server, a message to that effect will be sent to the Console.
Adds 'Ignore trailing slash' button to settings, can be set per site, set to 'yes' by default
Fixes a problem preventing crawling of pages if braces { } are present in the url
When crawling local files, directories are not reported as an error (as long as the directory exists)
Options for sitemap update frequency 'daily', 'weekly', 'monthly' etc altered to lowercase for compliance with the sitemap standard
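For reference, this is how a frequency value appears in a generated XML sitemap under the sitemaps.org protocol, which requires lowercase values; the url and priority shown are illustrative, not Scrutiny's actual output:

```xml
<url>
  <loc>https://example.com/page.html</loc>
  <changefreq>weekly</changefreq>
  <priority>0.5</priority>
</url>
```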
'Customize' added to toolbar (although this has been dropped by Apple from Lion 10.7 onwards so will only appear in 10.4 -> 10.6)
Updated application icon and removal of non-standard buttons on the various tabs. Adds several buttons to the customisable toolbar

Version 1.3.4

released February 2012

New feature - option to FTP the sitemap file to the server after generating it. Server and authentication details are saved with the config for each site. Some related options added to Preferences
Sends referrer header field for every request (other than the starting url) - this fixes a very small number of odd bugs
'Open local file' is added to the File menu. Functionality to crawl a site locally or import a list of links did exist in previous versions and was documented, but wasn't very accessible as it relied on a drag and drop into the starting url field (which still works and is to be improved in a future version)
Fixes bug preventing links to w3c being checked properly
Fixes a small memory leak
Clears data from flat link view before starting a new crawl
Fixes bug preventing crawl from finishing properly if user tries to highlight link on page before link has been checked
Fixes bug preventing date stamp from being written properly every time
Improves re-check broken links - now correctly uses as many threads as are set in settings and fixes problem preventing it from finishing every time
Clarifies number of links checked (x of y)

Version 1.3.3

released end December 2011

SEO and Validation can be disabled in global prefs for better performance if not needed
Allows setting of delay and timeout for Validation (in global prefs)
Links to subdomains can be considered as internal rather than external. ie peacockmedia.software and www.peacockmedia.software are considered the same site (which is not necessarily true, but is what most people would expect) and therefore both are followed. Adds a checkbox in global preferences to switch this option; the default is on. With the option on, Integrity will discover more links (and potentially more bad links) on certain websites. The option needs to be switched off if you wish to deliberately limit your crawl to one subdomain
Fixes problem with 'Re-check broken links' button
Fixes problem with exporting links if 'bad links only' are showing
If unregistered, the registration window was nagging. This was unintentional and has been switched off; it should now only show on startup and after 3 days

Version 1.3.2

released November 2011

Exports a .dot file (a standard format used by graphing applications) which can be opened as a visualisation in third-party graphing apps. Includes colour to indicate levels. Accessed via File>Export or a new toolbar button added by 'Customize toolbar...'
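For reference, a minimal sketch of the .dot format (the Graphviz DOT language). This illustrates the general shape of such a file, with node colours standing in for crawl levels; the page names and colour choices are illustrative, not Scrutiny's exact output:

```
digraph site {
  "home"    [color=red];     // level 0 - starting page
  "about"   [color=orange];  // level 1
  "contact" [color=orange];  // level 1
  "home" -> "about";
  "home" -> "contact";
}
```

A file like this can be opened directly in Graphviz or other graphing apps that read DOT.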
Allows crawling of some sites requiring authentication. Log in using Safari and check the box in advanced settings. Must be used with caution and with proper backups.
Adds advanced settings; authentication and custom header fields
Removes distance column from links tables. Shown in Sitemap table where it's more appropriate
Adds 'Getting started' to Help menu (online help to be improved shortly)
Fixes problems with 'Re-check broken links' and 'Re-check this link'
Fixes 'on page as title / url' preference
Fixes glitch with 'Inspect selected' button when flat view is showing

Version 1.3.1

released October 2011

Fixes bug preventing proper crawling of local files
Now handles UTF characters in meta keywords / description
Fixes bug preventing page title from showing if it contains UTF characters
Fixes 'Inspect selected' button on flat sortable link view
On pressing 'Go' for the second time, previous results are cleared immediately
File>New takes you back to the settings tab if not already in view

Version 1.3

released October 2011

Requires a licence key; an activation panel shows at startup with the option to continue and use the application.
Trial period set for 30 days

Version 1.2 (Beta)

released September 2011

Compatible with 10.4 / ppc upwards
As a compromise, Lion full-screen mode is not included
Adds 'file size' column to SEO table
Fixes print button - fits visible table to page width and landscapes page
Fixes problems with 'export csv' and 'export html' buttons
Fixes problem of user not being able to get main window open again if closed
Fixes bug causing base href not to be discovered which could lead to many improperly-constructed relative urls

Version 1.1 (Beta)

released September 2011

Fixes titles and descriptions not showing properly if carriage returns present between tags
Continue button greys properly on open and when crawl finishes
Adds meta keywords to SEO table
Adds url column to SEO table

Version 0.1 (Beta)

released August 2011

Uses tried and tested website crawling engine from Integrity
Adds SEO parameters: meta description, title and headings
Adds improvements to sitemap generation
Adds html validity check with configurable url for validator (allowing for local instance)
Adds full-screen mode and improved interface