Author: mav (mav@FreeBSD.org)
Date:   2022-06-30T01:16:42.292708Z

amd64: Stop using REP MOVSB for backward memmove()s.

The Enhanced REP MOVSB (ERMS) feature of CPUs starting from Ivy Bridge
makes REP MOVSB the fastest way to copy memory in most cases.  However,
the Intel Optimization Reference Manual says: "setting the DF to force
REP MOVSB to copy bytes from high towards low addresses will experience
significant performance degradation".  Measurements on Intel Cascade
Lake and Alder Lake, as well as on AMD Zen3, show that it can drop
throughput to as low as 2.5-3.5 GB/s, compared to ~10-30 GB/s for REP
MOVSQ or the hand-rolled loop used for non-ERMS CPUs.

This patch keeps ERMS use for forward ordered memory copies, but
removes it for backward overlapped moves, where it does not work well.
This is just a cosmetic sync with the kernel, since libc does not use
ERMS at this time.

Reviewed by:	mjg
MFC after:	2 weeks
(cherry picked from commit f22068d91bf53696ee13a69685e809d35776ec3f)

Git Hash: efd76157eff3c8f710df2ed9571d02f17729ff74
Git Author: mav@FreeBSD.org