You need to specify the radix. There's an overload of Integer#parseInt() which allows you to.
int foo = Integer.parseInt("1001", 2);
Answer from Matt Ball on Stack Overflow
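As a quick sanity check, the two-argument overload can be exercised alongside its inverse, Integer.toBinaryString (a minimal, self-contained sketch):

```java
public class ParseBinaryDemo {
    public static void main(String[] args) {
        // The radix argument (2) tells parseInt to read the digits as binary.
        int foo = Integer.parseInt("1001", 2);
        System.out.println(foo); // 9

        // Round trip: toBinaryString is the inverse for non-negative values.
        System.out.println(Integer.toBinaryString(9)); // 1001
    }
}
```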
This might work:
public int binaryToInteger(String binary) {
    char[] numbers = binary.toCharArray();
    int result = 0;
    // Walk the string from right to left; each '1' contributes its power of two.
    for (int i = numbers.length - 1; i >= 0; i--) {
        if (numbers[i] == '1') {
            result += Math.pow(2, numbers.length - i - 1);
        }
    }
    return result;
}
Hi, I'm trying to code a program that switches between int, string, binary, etc. I know how to convert a string to an int and vice versa, but how can you convert a binary number like "110" to decimal? Also, is it advisable to convert it directly from string to a decimal int, or to first convert the string to a binary int and then convert that int to a decimal int?
atoi doesn't handle binary numbers; it just interprets them as big decimal numbers. Your problem is that the resulting value is too large and you get an integer overflow, because the digits are being interpreted as a decimal number.
The solution is to use stoi, stol, or stoll, which were added to <string> in C++11. Call them like
int i = std::stoi("01000101", nullptr, 2);
- The returned value is the converted int value.
- The first argument is the std::string you want to convert.
- The second is a size_t * where it'll save the index of the first non-digit character.
- The third is an int that corresponds to the base that'll be used for the conversion.
For information on the functions look at its cppreference page.
Note that there are also pre C++11 functions with nearly the same name, as example: strtol compared to the C++11 stol.
They work for different bases too, but they don't do the error handling in the same way (in particular, they fall short when no conversion can be done on the given string at all, e.g. trying to convert "hello" to a number), and you should probably prefer the C++11 versions.
To make my point, passing "Hello" to both strtol and the C++11 stol would lead to:
strtol returns 0 and doesn't give you any way to identify it as an error; stol from C++11 throws std::invalid_argument and indicates that something is wrong.
Silently interpreting something like "Hello" as an integer might lead to bugs and should be avoided, in my opinion.
But for completeness sake a link to its cppreference page too.
It sounds like you should be using strtol() with 2 as the last argument.
As explained above, Integer.toBinaryString() treats ~0 and ~1 as unsigned int values, so the numbers their binary strings represent exceed Integer.MAX_VALUE.
You could use long to parse and convert back to int as below.
int base = 2;
for (Integer num : new Integer[] {~0, ~1}) {
    String binaryString = Integer.toBinaryString(num);
    Long decimal = Long.parseLong(binaryString, base);
    System.out.println("INPUT=" + binaryString + " decimal=" + decimal.intValue());
}
From http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/Integer.html#toBinaryString(int) : the toBinaryString() method converts its input into the binary representation of the "unsigned integer value [that] is the argument plus 2^32 if the argument is negative".
From http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/Integer.html#parseInt(java.lang.String,%20int) : the parseInt() method throws NumberFormatException if "The value represented by the string is not a value of type int".
Note that both ~0 and ~1 are negative (-1 and -2 respectively), so they will be converted to the binary representations of 2^32 - 1 and 2^32 - 2 respectively, neither of which can be represented in a value of type int, causing the NumberFormatException that you are seeing.
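To see the failure and both workarounds side by side, here is a minimal sketch (the parseUnsignedInt variant assumes Java 8 or later):

```java
public class UnsignedBinaryDemo {
    public static void main(String[] args) {
        String bits = Integer.toBinaryString(-1); // 32 ones: the unsigned value 2^32 - 1

        // Integer.parseInt rejects it: 2^32 - 1 does not fit in a signed int.
        try {
            Integer.parseInt(bits, 2);
        } catch (NumberFormatException e) {
            System.out.println("parseInt failed as expected");
        }

        // Workaround 1: parse as long, then narrow back to int.
        int viaLong = (int) Long.parseLong(bits, 2);
        System.out.println(viaLong); // -1

        // Workaround 2 (Java 8+): parseUnsignedInt accepts the full unsigned range.
        int viaUnsigned = Integer.parseUnsignedInt(bits, 2);
        System.out.println(viaUnsigned); // -1
    }
}
```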
So this was involved in an interview question I was practicing with. The way I see it, there are three different ways to do this: ("N" refers to the number to convert below)
Make a blank string variable which will contain the result. Repeatedly do a bitwise AND of the number with 1, and right-shift the number by 1 after each AND. This continually extracts the rightmost bit (LSB) of the integer. After extracting that bit value, convert it to its string value by doing some ASCII math/whatever, and then prepend it onto the string. The issue I see with this is that prepending to a string can be inefficient (this was why I asked about prepending vs. appending to a string in one of my earlier posts). The runtime efficiency would be O(# of bits in number * time to prepend to string), so I guess something like O(log N * N), assuming prepending is O(N)?
Do the same thing as number 1, except instead of prepending to the string, append to the string and then reverse it at the end. This would also be O(N log N), assuming appending is O(1) and reversing is O(N). However, if appending could potentially be O(N), because the string could be stored in an array which has a finite capacity and can fill up, would that make it O(N^2 log N)? I'm not too sure about my big-O analysis here, so I was wondering if someone could vouch for its correctness.
Figure out the number of bits contained in the number by doing floor(log2(N)) + 1 (we could count each bit individually, but I'm assuming the logarithm is more efficient? I don't know the big-O runtime of taking a logarithm, though; let's call this value "numBits"). Once we know that, make a bitmask by doing 1 << (numBits - 1). Then repeatedly do a bitwise AND of the number with the mask, which extracts the bits starting from the MSB. Convert each bit to its string equivalent with ASCII math/whatever, and then append it to the string. Then shift the mask 1 to the right. This method doesn't require an O(N) reversal operation or an O(N) prepending operation. If we assume appending to the string is O(1), then this algorithm's runtime is proportional to the number of bits contained in the number, so it would be O(log N), which makes it the best of the three. If we assume appending is O(N), however, then it would be O(N log N), wouldn't it? This also depends on the runtime efficiency of taking a logarithm, and I'm not sure whether that can be considered constant time. I'm assuming O(time to take logarithm) < O(log N), though.
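For reference, approach 2 can be sketched in Java with a StringBuilder, which gives amortized O(1) appends plus one O(number of bits) reverse at the end. The unsigned shift is my own addition so that negative inputs also terminate; in practice the library's Integer.toBinaryString already does all of this for you:

```java
public class IntToBinaryDemo {
    // Approach 2: extract the LSB with (n & 1), append it, shift right,
    // then reverse the accumulated digits once at the end.
    static String toBinary(int n) {
        if (n == 0) return "0";
        StringBuilder sb = new StringBuilder();
        while (n != 0) {
            sb.append((char) ('0' + (n & 1)));
            n >>>= 1; // unsigned shift so negative inputs terminate too
        }
        return sb.reverse().toString();
    }

    public static void main(String[] args) {
        System.out.println(toBinary(9));  // 1001
        System.out.println(toBinary(6));  // 110
        System.out.println(toBinary(-1)); // 32 ones
    }
}
```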
Which of these would you recommend I use if I was asked to "write a function to convert an integer to a binary encoded string"?