The textbook you are looking for is
The Go Programming Language Specification
Numeric types
A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are:
uint32    set of all unsigned 32-bit integers (0 to 4294967295)
uint64    set of all unsigned 64-bit integers (0 to 18446744073709551615)
int32     set of all signed 32-bit integers (-2147483648 to 2147483647)
int64     set of all signed 64-bit integers (-9223372036854775808 to 9223372036854775807)

There is also a set of predeclared numeric types with implementation-specific sizes:
uint    either 32 or 64 bits
int     same size as uint
Check the size of type int. On the Go Playground it's 4 bytes or 32 bits.
package main

import (
	"fmt"
	"runtime"
	"unsafe"
)

func main() {
	fmt.Println("arch", runtime.GOARCH)
	fmt.Println("int", unsafe.Sizeof(int(0)))
}
Playground: https://play.golang.org/p/2A6ODvhb1Dx
Output (Playground):
arch amd64p32
int 4
Run the program in your (LeetCode) environment. It's likely 8 bytes or 64 bits.
For example, in my environment,
Output (Local):
arch amd64
int 8
Here are some fixes to your code,
package main

import (
	"fmt"
	"runtime"
)

func main() {
	fmt.Println(runtime.GOARCH)
	fmt.Printf("%v\n", singleNumber([]int{-2, -2, 1, 1, -3, 1, -3, -3, -4, -2}))
}

func singleNumber(nums []int) int {
	sum := make([]int, 64)
	for _, v := range nums {
		for i := range sum {
			sum[i] += 1 & (v >> uint(i))
		}
	}
	res := 0
	for k, v := range sum {
		if (v % 3) != 0 {
			res |= (v % 3) << uint(k)
		}
	}
	fmt.Printf("res %+v\n", res)
	return res
}
Playground: https://play.golang.org/p/kaoSuesu2Oj
Output (Playground):
amd64p32
res -4
-4
Output (Local):
amd64
res -4
-4
Answer from peterSO on Stack Overflow

I have the following function, it works flawlessly

func (r *system) Quantity() int32 {
	// r.info.Quantity is type int
	return int32(r.info.Quantity)
}

Now I'm trying to cast it to *int32
func (r *system) Quantity() *int32 {
	// r.info.Quantity is type int
	return int32(&r.info.Quantity)
}
But I get the following error: cannot use &r.info.Quantity (type *int) as type *int32 in return argument, and of course that seems logical... is there a way to achieve this?
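There is no way to convert a *int directly to *int32 in Go (the pointed-to types have different sizes, so the pointer types are not convertible). What you can do is convert the value and return the address of the result. A sketch, using a hypothetical quantityPtr helper in place of the method above:

```go
package main

import "fmt"

// quantityPtr stands in for the Quantity method; q plays the role
// of r.info.Quantity (which is type int).
func quantityPtr(q int) *int32 {
	// Convert the value, store it in a fresh int32 variable, and
	// return that variable's address. Note the returned pointer
	// does NOT alias the original int: writes through it do not
	// update the source field.
	v := int32(q)
	return &v
}

func main() {
	p := quantityPtr(42)
	fmt.Println(*p) // 42
}
```

If the caller genuinely needs a pointer that writes back to the original field, the field itself would have to be declared int32.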
numeric - Go: convert big.Int to regular integer (int, int32, int64) - Stack Overflow
go - int vs int32 return value - Stack Overflow
How to convert an int64 to int in Go? - Stack Overflow
Int vs Int8, Int32
You convert them with a type "conversion"
var a int
var b int64
int64(a) < b
When comparing values, you always want to convert the smaller type to the larger. Converting the other way will possibly truncate the value:
var x int32 = 0
var y int64 = math.MaxInt32 + 1 // y == 2147483648
if x < int32(y) {
	// this evaluates to false, because int32(y) is -2147483648
}
Or in your case to convert the maxInt int64 value to an int, you could use
for a := 2; a < int(maxInt); a++ {
which would fail to execute correctly if maxInt overflows the max value of the int type on your system.
I came here because of the title, "How to convert an int64 to int in Go?". The answer is,
int(int64Var)
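When the int64 value might not fit in a 32-bit int, it can be worth guarding the conversion. A sketch of a hypothetical checked helper, using strconv.IntSize to detect the platform width:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
)

// toInt converts an int64 to int, reporting whether the value fits.
// On 64-bit platforms it always fits; on 32-bit platforms values
// outside the int32 range would be silently truncated without this check.
func toInt(v int64) (int, bool) {
	if strconv.IntSize == 32 && (v > math.MaxInt32 || v < math.MinInt32) {
		return 0, false
	}
	return int(v), true
}

func main() {
	n, ok := toInt(123)
	fmt.Println(n, ok) // 123 true
}
```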
This is possibly a stupid question, but I come from JS and I just started learning Go. Now as you may know, in JS everything is Number, but in Go there are int, int8, int32, int64. So, I was wondering, should I get in the habit of using a specific int type or just use int in 99% of cases? What's the advantage, and when do you use it in real life, if you do?
I am a bit concerned about a piece of code I wrote. It seems to work but I did not quite expect it to...
How does type conversion exactly work in Go when I convert a int32 to uint64 and then back to int32. I was somewhat scared that funny business might happen.
To give practical context of what I am doing. I have data like:
type data struct {
	a uint16
	b uint8
	c uint8
	d int32
}
Instead of using a data struct I want to package all the values in a single uint64 like:
data := uint64(a) | uint64(b)<<16 | uint64(c)<<24 | uint64(d)<<32
and then unpack it like (shifting back and applying a bit mask if necessary):
d := int32(data>>32)
my worries were around:
data = uint64(int32(d))
I was worried that it would try to do some conversion from int32 with negative values. But running some tests, all it seems to be doing is padding or truncating bits and interpreting the two's complement int values as uint.
Is this all safe and correct or am I about to run into nightmarish bugs?
Edit: In essence what I am asking with go type conversions does this hold for every value of d of type int32 :
d == int32(uint64(d))
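Yes, that round trip holds for every int32 value: uint64(d) sign-extends d's bit pattern, and int32(...) truncates back to the low 32 bits, which are exactly the original pattern. The same reasoning makes the packing scheme safe, because the <<32 shift pushes the sign-extended high bits out of the 64-bit value. A sketch with hypothetical pack/unpack helpers mirroring the layout above:

```go
package main

import "fmt"

// pack mirrors the scheme from the question: a in bits 0-15,
// b in 16-23, c in 24-31, d in 32-63.
func pack(a uint16, b, c uint8, d int32) uint64 {
	// uint64(d) sign-extends a negative d, but <<32 shifts the
	// extended high bits off the top, so exactly d's 32 bits
	// land in the upper half.
	return uint64(a) | uint64(b)<<16 | uint64(c)<<24 | uint64(d)<<32
}

// unpack shifts each field back down and truncates to its type.
func unpack(v uint64) (a uint16, b, c uint8, d int32) {
	return uint16(v), uint8(v >> 16), uint8(v >> 24), int32(v >> 32)
}

func main() {
	v := pack(48879, 7, 9, -42)
	fmt.Println(unpack(v)) // 48879 7 9 -42

	// The round trip from the question holds for extreme values too.
	for _, d := range []int32{0, 1, -1, 2147483647, -2147483648} {
		fmt.Println(d == int32(uint64(d))) // true every time
	}
}
```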
Simply use an int() conversion
The Go Programming Language Specification
Conversions
Conversions are expressions of the form
T(x)
where T is a type and x is an expression that can be converted to type T.
For example,
size := binary.BigEndian.Uint32(b[4:])
n, err := rdr.Discard(int(size))
From what I understand, the size of the int type in Go is platform dependent.
From the docs (tour):
The int, uint, and uintptr types are usually 32 bits wide on 32-bit systems and 64 bits wide on 64-bit systems. When you need an integer value you should use int unless you have a specific reason to use a sized or unsigned integer type.
So, on a 32-bit system, if I did something like this:
var myUint32 uint32 = 4294967295 // max positive value for uint32
myInt := int(myUint32)
what would happen to myInt on a 32-bit system? I can't seem to find docs for this. Would the bitwise value remain the same in memory (and using a bit as a sign bit)? Or would there be truncation of some sort? I am unsure. I also don't have a quick way to test this out.
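A quick way to test it without a 32-bit machine: when int is 32 bits wide, int(myUint32) behaves exactly like int32(myUint32), and a same-width integer conversion in Go keeps the bit pattern unchanged. No bits are truncated; the pattern 0xFFFFFFFF is simply reinterpreted as the two's-complement value -1. A sketch simulating the 32-bit case with int32:

```go
package main

import "fmt"

func main() {
	var myUint32 uint32 = 4294967295 // max uint32, bit pattern 0xFFFFFFFF

	// Same-width conversion: all 32 bits are kept, the top bit
	// becomes the sign bit, so the value reads as -1.
	fmt.Println(int32(myUint32)) // -1
}
```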
backstory, I have code in review that was suggested to be written like this:
func myFunc(a, b uint32) int {
	return int(a) - int(b)
}

I have a really bad feeling about this, especially since our code is planned to be open source. There is no guarantee what architecture this will run on. But I need solid evidence to argue we shouldn't do this.
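Your feeling is justified: the expression is only correct when int is wider than 32 bits, because int(a) can overflow a 32-bit int before the subtraction happens. You can demonstrate the failure on any machine by simulating the 32-bit case with int32:

```go
package main

import "fmt"

func main() {
	var a, b uint32 = 4294967295, 0

	// On a 64-bit platform, int(a) - int(b) == 4294967295: fine.
	// If int were 32 bits, the same expression would behave like
	// the line below, because int(a) already wraps to -1.
	fmt.Println(int32(a) - int32(b)) // -1, not 4294967295

	// Widening to int64 first is correct on every platform.
	fmt.Println(int64(a) - int64(b)) // 4294967295
}
```

That second line is the solid evidence: the function returns different results depending on GOARCH, which is exactly what portable open-source code must avoid.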
tbh, this is confusing if you came from programming in Java or C++
https://golang.org/pkg/builtin/#int
int is a signed integer type that is at least 32 bits in size. It is a distinct type, however, and not an alias for, say, int32.
Surprised no one mentioned that int usually has 64 bits on 64-bit systems and 32 bits on 32-bit systems. The distinction from both int32 and int64 is clear.
I have a field in my data which is small enough to simply use an int32. However, from what I can tell, int32 is generally used when representing a code point or something quite specific, rather than using it simply to save 32 bits of storage. Also, some of the stdlib packages (like strconv) deal exclusively with int64 rather than int32. This has meant changes to my code I didn't expect purely to handle int32 things.
What is your opinion (and perhaps the general consensus) on using int32 purely for efficiency purposes? Is it just worth using an int64 for simplicity?