How to convert *int64 to *int32?
You cannot.
Use var i32 int32 = int32(*x); t2(&i32); *x=int64(i32).
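The temporary-variable pattern above can be expanded into a complete program. A minimal sketch, where t2 is a hypothetical stand-in for a function that requires a *int32:

```go
package main

import "fmt"

// t2 is a hypothetical stand-in for a function that requires *int32.
func t2(p *int32) {
	*p++
}

func main() {
	x := new(int64)
	*x = 41

	// There is no direct conversion from *int64 to *int32.
	// Instead: copy the value into an int32, pass its address,
	// then copy the (possibly modified) value back.
	var i32 int32 = int32(*x)
	t2(&i32)
	*x = int64(i32)

	fmt.Println(*x) // 42
}
```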
Now, Go has generic types, which can help you write a generic conversion function:
type IntNumber interface {
	int | int8 | int16 | int32 | int64
}

func IntNumberPointersToPointers[T IntNumber, K IntNumber](from *T) *K {
	if from == nil {
		return nil
	}
	result := K(*from)
	return &result
}
You can call the function like this:
// result will have type *int
input := int32(100)
result := util.IntNumberPointersToPointers[int32, int](&input)
Go playground: https://go.dev/play/p/sRP17iFX6J6
This assumes you are aware that converting a bigger integer type to a smaller one may cause data loss, e.g. int64 to int32.
I am a bit concerned about a piece of code I wrote. It seems to work but I did not quite expect it to...
How exactly does type conversion work in Go when I convert an int32 to a uint64 and then back to an int32? I was somewhat scared that funny business might happen.
To give practical context of what I am doing, I have data like:
type data struct {
	a uint16
	b uint8
	c uint8
	d int32
}
Instead of using a data struct I want to package all the values in a single uint64 like:
data := uint64(a) | uint64(b)<<16 | uint64(c)<<24 | uint64(d)<<32
and then unpack it like (shifting back and applying a bit mask if necessary):
d := int32(data>>32)
my worries were around:
data = uint64(int32(d))
I was worried that it would try to do some conversion from int32 with negative values. But running some tests, all it seems to be doing is padding or truncating bits and interpreting the two's complement int values as uint.
Is this all safe and correct or am I about to run into nightmarish bugs?
Edit: In essence, what I am asking is: with Go type conversions, does this hold for every value d of type int32:
d == int32(uint64(d))
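One way to convince yourself is to run the pack/unpack roundtrip with a negative d. A minimal sketch using the expressions from the question (the field values are arbitrary):

```go
package main

import "fmt"

func main() {
	var a uint16 = 0xBEEF
	var b uint8 = 0x12
	var c uint8 = 0x34
	var d int32 = -5

	// Pack, as in the question. uint64(d) sign-extends, but the
	// <<32 shifts the extra sign bits out of the 64-bit word.
	data := uint64(a) | uint64(b)<<16 | uint64(c)<<24 | uint64(d)<<32

	// Unpack: the narrowing conversions keep only the low bits, and
	// int32(...) reinterprets them as two's complement.
	a2 := uint16(data)
	b2 := uint8(data >> 16)
	c2 := uint8(data >> 24)
	d2 := int32(data >> 32)

	fmt.Println(a2 == a, b2 == b, c2 == c, d2 == d) // true true true true
	fmt.Println(int32(uint64(d)) == d)              // true
}
```

The identity d == int32(uint64(d)) holds for every int32 value: widening sign-extends, and narrowing truncates back to exactly the same low 32 bits.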
From what I understand, the size of the int type in Go is platform dependent.
From the docs (tour):
The int, uint, and uintptr types are usually 32 bits wide on 32-bit systems and 64 bits wide on 64-bit systems. When you need an integer value you should use int unless you have a specific reason to use a sized or unsigned integer type.
So, on a 32-bit system, if I did something like this:
var myUint32 uint32 = 4294967295 // max value for uint32
myInt := int(myUint32)
what would happen to myInt on a 32-bit system? I can't seem to find docs for this. Would the bit pattern remain the same in memory (with the top bit now acting as a sign bit)? Or would there be truncation of some sort? I am unsure. I also don't have a quick way to test this out.
Backstory: I have code in review that was suggested to be written like this:
func myFunc (a, b uint32) int {
return int(a) - int(b)
}
I have a really bad feeling about this, especially since our code is planned to be open source. There is no guarantee what architecture this will run on. But I need solid evidence to argue that we shouldn't do this.
I have a field in my data which is small enough to simply use an int32. However, from what I can tell, int32 is generally used when representing a code point or something quite specific, rather than using it simply to save 32 bits of storage. Also, some of the stdlib packages (like strconv) deal exclusively with int64 rather than int32. This has meant changes to my code I didn't expect purely to handle int32 things.
What is your opinion (and perhaps the general consensus) on using int32 purely for efficiency purposes? Is it just worth using an int64 for simplicity?
For example, to detect 32-bit integer overflow for addition,
package main

import (
	"errors"
	"fmt"
	"math"
)

var ErrOverflow = errors.New("integer overflow")

func Add32(left, right int32) (int32, error) {
	if right > 0 {
		if left > math.MaxInt32-right {
			return 0, ErrOverflow
		}
	} else {
		if left < math.MinInt32-right {
			return 0, ErrOverflow
		}
	}
	return left + right, nil
}

func main() {
	var a, b int32 = 2147483327, 2147483327
	c, err := Add32(a, b)
	if err != nil {
		// handle overflow
		fmt.Println(err, a, b, c)
	}
}
Output:
integer overflow 2147483327 2147483327 0
For 32-bit integers, the standard way is, as you said, to convert to 64-bit and then narrow again [1]:
package main

func add32(x, y int32) (int32, int32) {
	sum64 := int64(x) + int64(y)
	return x + y, int32(sum64 >> 31)
}

func main() {
	{
		s, c := add32(2147483646, 1)
		println(s == 2147483647, c == 0)
	}
	{
		s, c := add32(2147483647, 1)
		println(s == -2147483648, c == 1)
	}
}
However if you don't like that, you can use some bit operations [2]:
func add32(x, y int32) (int32, int32) {
	sum := x + y
	return sum, x & y | (x | y) &^ sum >> 30
}
[1] https://github.com/golang/go/blob/go1.16.3/src/math/bits/bits.go#L368-L373
[2] https://github.com/golang/go/blob/go1.16.3/src/math/bits/bits.go#L380-L387
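A quick sketch to check the two variants against each other on the test inputs above (add32a and add32b are hypothetical names for the widening and bit-twiddling versions, respectively):

```go
package main

// add32a: widen to int64, then derive the carry from the high bits.
func add32a(x, y int32) (int32, int32) {
	sum64 := int64(x) + int64(y)
	return x + y, int32(sum64 >> 31)
}

// add32b: derive the carry from the operand and result sign bits.
func add32b(x, y int32) (int32, int32) {
	sum := x + y
	return sum, x & y | (x | y) &^ sum >> 30
}

func main() {
	for _, p := range [][2]int32{{2147483646, 1}, {2147483647, 1}} {
		s1, c1 := add32a(p[0], p[1])
		s2, c2 := add32b(p[0], p[1])
		println(s1 == s2, c1 == c2) // true true for both pairs
	}
}
```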
You can use the generics approach to have compile-time guarantees about what types are accepted — with a small modification to make it more broadly useful — but that won't help when targeting different architectures. For that, you can use build constraints so that the appropriate version of your function is compiled into the executable.
As an example, build for 32-bit linux:
//go:build linux && 386

func ConvertUint[T ~uint | ~uint32](x T) uint {
	return uint(x)
}
or build for 64-bit macOS on Apple silicon:
//go:build darwin && arm64

func ConvertUint[T ~uint | ~uint32 | ~uint64](x T) uint {
	return uint(x)
}
I used arbitrary build tags as an example; your actual build conditions could be more complex than that. You can see Go's supported distributions, and get an idea of which build tags you might use, with:
go tool dist list
When building with go build, the toolchain will pick up the GOOS and GOARCH environment variables and build the appropriate file. Otherwise you can pass the tags explicitly to the build command as shown in this question.
I don't know of a very clean approach, but if there are potentially several different types that you want to allow to be converted to uint, then instead of defining a separate function for each conversion, another option is to define an interface with the appropriate set of types, and then a generic function that is defined only for types in that set:
type SafeToConvertToUInt interface {
	uint | uint32 | uint64
}

func ToUInt[T SafeToConvertToUInt](x T) uint {
	return uint(x)
}
[playground link]
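Usage might then look like this; a minimal sketch with the interface and function restated so it runs on its own:

```go
package main

import "fmt"

// Only types listed in the interface's type set can be passed to ToUInt.
type SafeToConvertToUInt interface {
	uint | uint32 | uint64
}

func ToUInt[T SafeToConvertToUInt](x T) uint {
	return uint(x)
}

func main() {
	var a uint32 = 42
	var b uint64 = 7
	fmt.Println(ToUInt(a), ToUInt(b)) // 42 7; T is inferred from the argument

	// ToUInt(int32(1)) // compile error: int32 is not in the type set
}
```

Note that without ~ prefixes in the type set, named types such as `type MyUint uint32` are not accepted.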