Feature Name
Slightly more optimal peak memory metric on Linux
Description
When I pulled DocC into our project, our linter pointed out a small optimization in PeakMemory.swift, namely to fold a filter and first into first(where:) (Who knew SwiftLint did that too?)
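For context, the suggested change looks roughly like this. This is a simplified, hypothetical sketch rather than the actual PeakMemory.swift implementation; only the file name and the fact that the value comes from the "VmPeak" line of /proc/self/status are taken from the issue:

```swift
import Foundation

// Simplified sketch of reading the peak memory value on Linux from
// /proc/self/status. Names and parsing details are illustrative; this is
// not the actual PeakMemory.swift code.
func peakMemoryValueKilobytes() -> Int? {
    guard let status = try? String(contentsOfFile: "/proc/self/status", encoding: .utf8) else {
        return nil
    }
    let lines = status.split(separator: "\n")

    // Before: a separate filter followed by first, which SwiftLint flags.
    // let vmPeakLine = lines.filter { $0.hasPrefix("VmPeak") }.first

    // After: fold both calls into a single first(where:).
    let vmPeakLine = lines.first(where: { $0.hasPrefix("VmPeak") })

    // The line looks like "VmPeak:    123456 kB"; pull out the numeric field.
    return vmPeakLine?
        .split { $0 == ":" || $0.isWhitespace }
        .dropFirst()
        .first
        .flatMap { Int($0) }
}
```

Functionally the two forms are equivalent; first(where:) simply stops at the first match instead of building the intermediate filtered array.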
Motivation
It makes a micro-difference when running in automation on a Linux image
Importance
A very low priority
Alternatives Considered
The only alternative I see is to leave the code as-is
I think "micro difference" is overstating the impact of this 😉. The combined filter and first lines only take up 0.1 % of the time to compute this value.
If we wanted to address the performance of this code we should avoid splitting all the lines (which is where ~80% of the time is spent) and find the "VmPeak" line in a way that avoids new allocations.
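For illustration only, a hypothetical sketch of that direction (not the actual PeakMemory.swift code; it still loads the whole file into a String, so it reduces allocations rather than eliminating them):

```swift
import Foundation

// Hypothetical sketch: locate the "VmPeak" entry directly instead of
// splitting every line of /proc/self/status into an array of substrings.
func peakMemoryValueKilobytes() -> Int? {
    guard let status = try? String(contentsOfFile: "/proc/self/status", encoding: .utf8),
          let labelRange = status.range(of: "VmPeak:") else {
        return nil
    }
    // The value follows the label, e.g. "VmPeak:    123456 kB".
    let value = status[labelRange.upperBound...]
        .drop(while: { $0 == " " || $0 == "\t" })
        .prefix(while: { $0.isNumber })
    return Int(value)
}
```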
That said, this metric is only computed once during a full documentation build and doesn't even register in a profile, so the performance of this function isn't a problem.