K 10
svn:author
V 3
mjg
K 8
svn:date
V 27
2018-09-06T19:42:40.157361Z
K 7
svn:log
V 440
amd64: depessimize copyinstr_smap

The stac/clac combo around each byte copy is causing a measurable
slowdown in benchmarks. Do it only before and after all data is
copied. While here reorder the code to avoid a forward branch in
the common case.

Note the copying loop (originating from copyinstr) is avoidably slow
and will be fixed later.

Reviewed by:	kib
Approved by:	re (gjb)
Differential Revision:	https://reviews.freebsd.org/D17063
END