I would do this using strconv.FormatUint:
import "strconv"
var u uint32 = 17
var s = strconv.FormatUint(uint64(u), 10)
// "17"
Note that the expected parameter is uint64, so you have to convert your uint32 first (Go calls this a conversion rather than a cast). There is no specific FormatUint32 function.
I would simply use Sprintf or even just Sprint:
var n uint32 = 42
str := fmt.Sprint(n)
println(str)
Go is strongly typed, so casting a number directly to a string would not make sense. Think of C, where a string is a char *, a pointer to the first character of a \0-terminated sequence. Casting a number to a string there would make the "string" point at whatever address the number happens to encode, which is meaningless. This is why you need to convert explicitly.
I'm a Dead Rising 3 modder, and the game stores strings as UInt32 values. This is also how it stores its animation IDs, which don't actually seem to correlate to anything (for example, the animation "player_attack_heavymetal_heavy_spin" converted to UInt32 is "2350023456", while its actual animation ID is "950460626").
This means that if the animation isn't referenced anywhere else in the code besides the animation file itself, I can't use it. So I've been trying to reverse engineer the strings, but I haven't had any luck, just getting 4 illegible characters. Does anyone have a way to help? Is it impossible?
I tried the following:
package main

import "fmt"

func main() {
	n := uint8(3)
	fmt.Println(string(n))            // prints an invisible control character (code point 3), not "3"
	fmt.Println(fmt.Sprintf("%v", n)) // prints 3
}
Playground: https://go.dev/play/p/OGE5u8SuAy2
Confused why converting uint8 to string appears to print nothing, while fmt.Sprintf works.
Another option is to use the solution from Oraclize (https://github.com/oraclize/ethereum-api/blob/master/oraclizeAPI_0.5.sol), which suits me best:
0.5 Compiler Version:
function uint2str(uint _i) internal pure returns (string memory _uintAsString) {
    if (_i == 0) {
        return "0";
    }
    // Count the number of decimal digits in _i.
    uint j = _i;
    uint len;
    while (j != 0) {
        len++;
        j /= 10;
    }
    // Fill the byte array from the least significant digit backwards.
    bytes memory bstr = new bytes(len);
    uint k = len - 1;
    while (_i != 0) {
        bstr[k--] = byte(uint8(48 + _i % 10)); // 48 is ASCII '0'
        _i /= 10;
    }
    return string(bstr);
}
Pre 0.5 Compiler Version:
function uint2str(uint i) internal pure returns (string) {
    if (i == 0) return "0";
    uint j = i;
    uint length;
    while (j != 0) {
        length++;
        j /= 10;
    }
    bytes memory bstr = new bytes(length);
    uint k = length - 1;
    while (i != 0) {
        bstr[k--] = byte(48 + i % 10);
        i /= 10;
    }
    return string(bstr);
}
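For readers comparing across the answers here, the same digit-by-digit algorithm can be sketched in Go (this is purely illustrative; Go's standard library already provides it as strconv.FormatUint):

```go
package main

import "fmt"

// uint2str converts an unsigned integer to its decimal string,
// mirroring the Solidity uint2str above.
func uint2str(i uint64) string {
	if i == 0 {
		return "0"
	}
	// Count the number of decimal digits.
	n := 0
	for j := i; j != 0; j /= 10 {
		n++
	}
	// Fill the buffer from the least significant digit backwards.
	buf := make([]byte, n)
	for k := n - 1; i != 0; k-- {
		buf[k] = byte('0' + i%10)
		i /= 10
	}
	return string(buf)
}

func main() {
	fmt.Println(uint2str(0))   // prints "0"
	fmt.Println(uint2str(123)) // prints "123"
}
```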
You can convert the uint to bytes32 with bytes32 data = bytes32(u) (uint is the same as uint256; see "How to convert a uint256 type integer into a bytes32?").
Then use the approach from "How to convert a bytes32 to string":
function bytes32ToString(bytes32 data) returns (string) {
    bytes memory bytesString = new bytes(32);
    for (uint j = 0; j < 32; j++) {
        // Shift byte j into the most significant position; the bytes32-to-byte
        // conversion then keeps only that top byte.
        byte char = byte(bytes32(uint(data) * 2 ** (8 * j)));
        if (char != 0) {
            bytesString[j] = char;
        }
    }
    return string(bytesString);
}
Note that the result is always 32 bytes long; any zero padding bytes remain in the returned string.
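For comparison, the same "fixed 32 bytes to string" idea in Go, this time trimming the zero padding (a sketch, not tied to any Ethereum library):

```go
package main

import (
	"bytes"
	"fmt"
)

// bytes32ToString interprets a fixed 32-byte value as a NUL-padded string
// and trims the padding.
func bytes32ToString(data [32]byte) string {
	end := bytes.IndexByte(data[:], 0)
	if end < 0 {
		end = len(data) // no padding: use all 32 bytes
	}
	return string(data[:end])
}

func main() {
	var data [32]byte
	copy(data[:], "hello")
	fmt.Printf("%q\n", bytes32ToString(data)) // "hello"
}
```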
Hello,
I am wondering about a data conversion from uint32_t to char*.
There is a 32-bit register, *(UID_REG_ADDRESS), which stores 4 ASCII characters.
Now I am wondering what is the safest solution to print it:
typedef union
{
    uint32_t u;
    char c[sizeof(uint32_t)];
} conv_t; // maybe it should be packed
...
conv_t data = { .u = *(UID_REG_ADDRESS) };
printf("%.4s\n", data.c); /* note: data.c here -- passing the uint32_t member to %s would be undefined behavior */
or like that:
printf("%.4s\n", (char*)UID_REG_ADDRESS);
The first one seems to be safe, however the second looks more readable (and produces less code). Both of them work well on my machine, but I wonder how it would behave on another platform. For example: could the characters be read with a different offset (32 bits)?
At first I thought that the shorter code would not work.