Suppose we need to compute the reciprocal or the reciprocal square root of packed floating-point data. Both can easily be done with:

```
__m128 recip_float4_ieee(__m128 x) { return _mm_div_ps(_mm_set1_ps(1.0f), x); }
__m128 rsqrt_float4_ieee(__m128 x) { return _mm_div_ps(_mm_set1_ps(1.0f), _mm_sqrt_ps(x)); }
```

This works perfectly well but is slow: according to the guide, these take 14 and 28 cycles (throughput) on Sandy Bridge. The corresponding AVX versions take almost the same time on Haswell.

On the other hand, the following versions can be used instead:

```
__m128 recip_float4_half(__m128 x) { return _mm_rcp_ps(x); }
__m128 rsqrt_float4_half(__m128 x) { return _mm_rsqrt_ps(x); }
```

They take only one or two cycles (throughput), giving a major performance boost. However, they are VERY approximate: they produce results with relative error up to 1.5 * 2^-12. Given that the machine epsilon of single-precision floats is 2^-24, we can say that this approximation has roughly *half* precision.
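That error bound can be spot-checked directly. A minimal sketch (the helper name `rcp_rel_error` is my own, and it assumes an x86 target with SSE; this checks a few sample values, it is not a proof of the bound):

```c
#include <immintrin.h>
#include <math.h>

/* Relative error of the hardware rcp approximation against exact 1/x. */
float rcp_rel_error(float x) {
    float approx = _mm_cvtss_f32(_mm_rcp_ps(_mm_set1_ps(x)));
    float exact  = 1.0f / x;
    return fabsf(approx - exact) / exact;
}
```

For any normal positive input, the result should stay below 1.5 * 2^-12 ≈ 3.7e-4.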

It seems that a Newton-Raphson iteration can be added to produce a result with *single* precision (perhaps not as exact as the IEEE standard requires, though); see GCC, ICC, and discussions at LLVM. In theory, the same method can be used for double-precision values, producing *half*, *single*, or *double* precision.
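For reference, the underlying Newton-Raphson updates in scalar form (a sketch; the helper names are my own). Each step roughly squares the relative error, which is why a single iteration lifts a half-precision seed to about single precision:

```c
/* One Newton-Raphson step refining r ~ 1/x:       r' = r * (2 - x*r)       */
static double nr_recip_step(double x, double r) {
    return r * (2.0 - x * r);
}

/* One Newton-Raphson step refining r ~ 1/sqrt(x): r' = 0.5 * r * (3 - x*r*r) */
static double nr_rsqrt_step(double x, double r) {
    return 0.5 * r * (3.0 - x * r * r);
}
```

For example, starting from the crude guess r = 0.32 for 1/3, one step gives 0.3328; the error drops from about 1.3e-2 to about 5.3e-4, illustrating the quadratic convergence.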

I'm interested in working implementations of this approach for both float and double data types and for all three precisions (half, single, double). Handling special cases (division by zero, sqrt(-1), inf/nan, and the like) is not necessary. Also, it is not clear to me which of these routines would be faster than the trivial IEEE-compliant solutions, and which would be slower.

Here are a few minor constraints on answers, please:

- Use intrinsics in your code samples. Assembly is compiler-dependent, so it is less useful.
- Use a similar naming convention for functions.
- Implement routines taking a single SSE/AVX register of densely packed float/double values as input. If there is a considerable performance boost, you can also post routines taking several registers as input (two registers may be viable).
- Do not post both SSE and AVX versions if they are absolutely identical up to changing `_mm` to `_mm256` and vice versa.

Any performance estimates, measurements, and discussions are welcome.

## SUMMARY

Here are the versions for single-precision float numbers with one NR iteration:

```
__m128 recip_float4_single(__m128 x) {
    // ~12-bit approximation, then one Newton-Raphson step:
    // res' = res * (2 - x*res) = 2*res - x*res*res
    __m128 res  = _mm_rcp_ps(x);
    __m128 muls = _mm_mul_ps(x, _mm_mul_ps(res, res));
    return _mm_sub_ps(_mm_add_ps(res, res), muls);
}
__m128 rsqrt_float4_single(__m128 x) {
    // ~12-bit approximation, then one Newton-Raphson step:
    // res' = 0.5 * res * (3 - x*res*res)
    __m128 three = _mm_set1_ps(3.0f), half = _mm_set1_ps(0.5f);
    __m128 res  = _mm_rsqrt_ps(x);
    __m128 muls = _mm_mul_ps(_mm_mul_ps(x, res), res);
    return _mm_mul_ps(_mm_mul_ps(half, res), _mm_sub_ps(three, muls));
}
```
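For *double*, there is no `_mm_rcp_pd`/`_mm_rsqrt_pd`, so one way to build the corresponding versions is to seed the iteration from the float approximations and refine in double. A hedged sketch (function names are my own, not from the benchmark; one NR step shown, which lifts the ~12-bit seed to roughly *single* precision):

```c
#include <immintrin.h>

/* recip for 2 doubles with roughly single precision:
   seed from the 12-bit float rcp, then one NR step in double. */
__m128d recip_double2_single(__m128d x) {
    __m128d r   = _mm_cvtps_pd(_mm_rcp_ps(_mm_cvtpd_ps(x)));  /* ~12-bit seed */
    __m128d two = _mm_set1_pd(2.0);
    return _mm_mul_pd(r, _mm_sub_pd(two, _mm_mul_pd(x, r)));  /* r*(2 - x*r) */
}

/* rsqrt for 2 doubles with roughly single precision:
   seed from the 12-bit float rsqrt, then one NR step in double. */
__m128d rsqrt_double2_single(__m128d x) {
    __m128d r     = _mm_cvtps_pd(_mm_rsqrt_ps(_mm_cvtpd_ps(x)));
    __m128d three = _mm_set1_pd(3.0), half = _mm_set1_pd(0.5);
    __m128d muls  = _mm_mul_pd(_mm_mul_pd(x, r), r);
    return _mm_mul_pd(_mm_mul_pd(half, r), _mm_sub_pd(three, muls));  /* 0.5*r*(3 - x*r*r) */
}
```

Further NR steps in double (repeating the same update on the result) should approach full double precision, at the cost listed in the benchmark results below.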

The answer given by Peter Cordes explains how to create the other versions, and contains a thorough theoretical performance analysis.

**You can find all the implemented solutions, together with the benchmark, here:** recip_rsqrt_benchmark.

The throughput results obtained on Ivy Bridge are presented below. Only single-register SSE implementations have been benchmarked. Time spent is given in cycles per call. The first number is for half precision (no NR), the second for single precision (1 NR iteration), the third for 2 NR iterations.

- *recip* on *float* takes **1, 4** cycles versus **7** cycles.
- *rsqrt* on *float* takes **1, 6** cycles versus **14** cycles.
- *recip* on *double* takes **3, 6, 9** cycles versus **14** cycles.
- *rsqrt* on *double* takes **3, 8, 13** cycles versus **28** cycles.

Warning: I had to round the raw results creatively...