Qwerty.55 wrote:
Kllrnohj wrote:
Qwerty.55 wrote:
At the most basic level, one can program in the machine's native language of Binary (or Hexadecimal, if you prefer the easy way out).


Binary and hexadecimal are *NOT* languages, you *CANNOT* program in them. They are merely how data is stored.


Even if you subscribe to the unrealistic definition put forth by most dictionaries that a computer language must be designed, machine code is still a language, by virtue of the fact that the microprocessor's opcodes are designed into the microprocessor.
Quote:
Assembly is just an easier way of writing machine code - binary doesn't enter the picture. Assembly is not an abstraction of machine code, it *IS* machine code, just in human-readable form instead of CPU-readable form.


I think your computer will disagree when you try to feed it ASCII ASM source as an executable file.
No, you run it through an assembler first. Even though a translation step is involved, it is a one-to-one translation of assembly mnemonics into binary, so it is the same thing. Since the translation is one-to-one, assembly is not an abstraction, just a different way of representing the same instructions. By writing raw hex, you are merely manipulating those same instructions in a less intuitive way.
Quote:
Quote:
Why on earth do you keep writing "ASM/Hex"? That doesn't mean anything, it is just assembly - no hex.


Since I program in pure hex, I beg to differ.
No, you program in ASCII representations of hexadecimal that your hex editor turns into a binary file. The hex editor is acting as your compiler in this case, and you are making things harder on yourself by doing it this way. It would be just as easy to make an editor that lets you type in assembly mnemonics and outputs a binary from them, though keeping the editor and the assembler as separate programs makes far more sense, which is why it is done that way.
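To make that concrete, here is a minimal sketch (assuming x86-64 and a POSIX system that permits writable+executable mappings): the same six bytes are the program whether you display them as hex, binary, or mnemonics.

Code:
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* mov eax, 42 ; ret  -- hand-assembled x86-64 opcodes */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* Copy the bytes into an executable page and jump to them. */
    void *page = mmap(NULL, sizeof(code),
                      PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) return 1;
    memcpy(page, code, sizeof(code));

    int (*fn)(void) = (int (*)(void))page;
    printf("%d\n", fn());   /* prints 42 */
    return 0;
}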

Quote:
Quote:
Not true at all. Almost all high level languages allow dropping to native code if you so desire.


Good luck running that inline ASM from the Python interpreter in any way that wouldn't send a normal Python programmer running.

There are ways to call C libraries from Python, and C can have inline assembly. Done.
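For example, a sketch assuming GCC on x86-64 (the file and function names are made up for illustration): a C routine containing inline assembly, built as a shared library, can be loaded from Python with the standard ctypes module.

Code:
/* addasm.c -- build with: gcc -shared -fPIC -o libaddasm.so addasm.c
   Then, from Python:
       import ctypes
       lib = ctypes.CDLL("./libaddasm.so")
       print(lib.add_asm(2, 3))   # prints 5
*/
int add_asm(int a, int b) {
    int result;
    /* GCC extended inline assembly: result = a; result += b */
    __asm__ ("addl %2, %0"
             : "=r" (result)
             : "0" (a), "r" (b));
    return result;
}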
Quote:
Quote:

Quote:
However, contrary to popular belief, it does NOT increase portability. Compiled programs are turned into machine code, so unless they're compiled on-site by the user (often not the case), they are no more portable than an appropriately written ASM/Hex program.


Bwahahaha, you are on crack. C is absolutely more portable. You are only talking about binary portability and are dismissing code portability, but code portability is *HUGELY* important. Look at the Linux kernel - its code portability is off the charts, which is why it's used in bloody everything from embedded microcontrollers, to smartphones, to multi-billion dollar server farms.


And...? Personally, I write every assembly routine out in pseudo-code first. That pseudo-code works in almost every text editor ever invented, which means that my code is thus more portable than C.

If you'll actually bother to read what I wrote, you'll notice that I specifically addressed your problem in the original statement.

Um, what? Pseudo-code has nothing to do with this argument, because it cannot be made to run on anything; it isn't code. If your ASCII representation of hexadecimal representations of machine opcodes were so great to program in, why bother with pseudo-code at all? Oh, right: because higher-level programming languages abstract things so that code is quicker to write and easier to understand.
Quote:
Quote:
Also not true. Compiled C programs can absolutely end up faster than assembly programs by nature of the compiler optimizations that just aren't feasible when coding by hand. For example, inlining functions or using arch-specific optimizations (if you think all x86 CPUs are the same in terms of supported opcodes, you are horribly mistaken). C code isn't any less memory efficient, either, as it has no overhead. If your C programs use more memory than your assembly ones, it's because you screwed up the code, not because C uses more memory. Hence why C is suitable for embedded and kernel development, because it needs no memory itself.


Um, inlining functions isn't feasible to do by hand? Did I miss a memo or something? Please tell me you're joking.

Also, if you'll examine the OP in question carefully, you'll notice that I actually described the difference between compilers and interpreters, which would imply that I know that C has "no memory overhead." In other words, that wasn't the inefficiency I was talking about.


Quote:

EDIT: Oh, I also forgot to mention that optimized assembly actually varies between CPU manufacturer and CPU models. So fast AMD assembly can absolutely end up being slow Intel assembly - despite both being x86. Compilers can compensate for that by compiling code for specific CPU vendor models, hand coded assembly can't.


Oh, so that's why almost every piece of software I download has a different download for AMD and Intel chips, as well as each OS supported therein?
Because compilers are smart enough to know what is supported everywhere and what is generally fast on all processors, and you can tell them to target specific things. For instance, Arch Linux compiles its packages targeting the i686 platform, while the Debian distribution targets i386, meaning its packages can run on older processors. The trade-off is that Arch packages can take advantage of newer and more efficient instructions, while Debian can run on a wider variety of older systems.

There is a reason the Gentoo Linux distribution was so popular for a while: it lets people compile their software for their specific CPU and chipset, allowing the compiler to make speed optimizations that cannot be made when the code has to run efficiently on many different processors.
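To illustrate (a sketch assuming GCC or Clang, whose predefined feature macros these are), a single source file can adapt to whatever -march the distribution chose:

Code:
#include <stdio.h>

int main(void) {
#if defined(__SSE2__)
    puts("built with SSE2 available (e.g. -march=pentium4 or newer)");
#elif defined(__i686__)
    puts("built for i686 (Pentium Pro and newer)");
#else
    puts("built for a baseline target such as i386");
#endif
    return 0;
}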

Quote:

Quote:

Quote:
The programmer is also not required to have an intimate knowledge of many common algorithms because they are provided through routines in the language itself.


Not true at all. Most languages have *very* few functions built into the language itself. You are confusing "common library" with "language". C, for example, really doesn't come with anything whatsoever.


Again, those libraries are part of the language, even if not in the official specification. How many of the C coders out there do you think could rewrite any of the standard libs they use so often and have the product turn out as well as the original?
Look at Ion and Doors CS on the calculators: would you want every calculator programmer to have to rewrite iRandom or iPutSprite for every game? There is a reason that libraries exist, and it's a good thing.

As for C libraries, there is example code or pseudo-code available for all of the standard C library routines, so I'm sure most programmers could come up with something that would work. But anything past the standard libraries was written by other programmers, and I have seen in the past that if you don't know how a library works, or how you could accomplish what it does yourself, you will most likely have a hard time using it to its full extent. Just because Java provides many data structures in its standard libraries doesn't mean one can use them effectively, or even know when best to use which one, without knowing how to implement those data structures oneself.
Qwerty.55 wrote:
Kllrnohj wrote:
Binary and hexadecimal are *NOT* languages, you *CANNOT* program in them. They are merely how data is stored.


Even if you subscribe to the unrealistic definition put forth by most dictionaries that a computer language must be designed, machine code is still a language, by virtue of the fact that the microprocessor's opcodes are designed into the microprocessor.


Your response has nothing to do with what I said in that quote - did you misquote there or something?

Quote:
I think your computer will disagree when you try to feed it ASCII ASM source as an executable file.


Uh, go re-read what I said there.

Quote:
Since I program in pure hex, I beg to differ.


Beg all you want, you're still wrong. You don't program in hex, you program in machine code. Hex is merely the representation.

Quote:
Again, I never said you couldn't. Hex merely makes some optimizations easier to see.


Not really. It might to you, but in general it certainly isn't.

Quote:
And...? Personally, I write every assembly routine out in pseudo-code first. That pseudo-code works in almost every text editor ever invented, which means that my code is thus more portable than C.

If you'll actually bother to read what I wrote, you'll notice that I specifically addressed your problem in the original statement.


Your addressing of that problem was simply "la la la code portability is dumb nobody needs it cause I said so" - it was not an actual argument, and the stupidity of it is staggering.


Code:
#include <stdio.h>

int main() {
    printf("Hello, World!\n");
    return 0;
}


See that little snippet? That can compile exactly as is for hundreds of platforms and dozens of architectures. It is *impossible* to do the same in assembly. This is *extremely* important, you cannot shrug off code portability as if it doesn't matter.

Also, C code can be opened in every text editor, too - so even with your incredibly ridiculous notion that "pseudo-code == source portability", you still don't end up with *more* portable.

Quote:
Um, inlining functions isn't feasible to do by hand? Did I miss a memo or something? Please tell me you're joking.


No, it is not feasible. It is possible; it is not feasible. Hint: go look up the definition of "feasible".

But feel free to prove me wrong - go download the source of, let's say, WebKit (a cross-platform, cross-architecture library) and inline all the functions that should be inlined (oh, but don't forget to allow for both inlined and not - some people care about the resulting binary size. Oh, and whether or not a function should be inlined can depend on the architecture, so you'll need to make multiple versions for at least x86, x86_64, and armv5). I'll be nice, you can even just inline the C, you don't need to convert it to assembly first.
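For anyone following along, here is a trivial sketch of what inlining means (hypothetical helper; the transformation itself is easy - maintaining it by hand across a huge code base is what isn't):

Code:
#include <stdio.h>

/* Before: a helper the compiler may inline automatically. */
static int square(int x) { return x * x; }
static int sum_of_squares(int a, int b) { return square(a) + square(b); }

/* After hand-inlining: the call overhead is gone, but every future change
   to "square" must now be copied by hand into every call site, in every
   architecture-specific version you maintain. */
static int sum_of_squares_inlined(int a, int b) { return (a * a) + (b * b); }

int main(void) {
    printf("%d %d\n", sum_of_squares(3, 4), sum_of_squares_inlined(3, 4));
    return 0;
}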

Quote:
Also, if you'll examine the OP in question carefully, you'll notice that I actually described the difference between compilers and interpreters, which would imply that I know that C has "no memory overhead." In other words, that wasn't the inefficiency I was talking about.


Uh, then what memory inefficiency were you talking about?


Quote:
Oh, so that's why almost every piece of software I download has a different download for AMD and Intel chips, as well as each OS supported therein?


Actually, yes - at least partially. Do they support multiple OSes? No, that's retarded and you suck at reading. Do they have different code paths for AMD and Intel chips? YES! Do they have different code paths depending on supported CPU features (for example, x86 has SSE, SSE2, SSE3, SSSE3, SSE4, MMX, etc..., or ARM has ARM7, ARM9, ARM11, NEON, etc...)? YES! Can they have entirely different instruction sets (say, x86 and PowerPC) in the same binary? YES!

Most don't, sure (it also doesn't matter for most), but some absolutely do. For some compilers, generating multiple optimized paths for specific models is merely a compile flag.
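As a sketch of what such run-time path selection looks like in source (assuming GCC or Clang on x86, which provide these builtins; the "SSE4" variant here is just a stand-in for real vectorized code):

Code:
#include <stdio.h>

static void scale_scalar(float *v, int n) {
    for (int i = 0; i < n; i++) v[i] *= 2.0f;
}

/* Stand-in: a real build would use SSE4 intrinsics here. */
static void scale_sse4(float *v, int n) {
    for (int i = 0; i < n; i++) v[i] *= 2.0f;
}

int main(void) {
    float v[4] = {1, 2, 3, 4};
    __builtin_cpu_init();                    /* query the running CPU once */
    if (__builtin_cpu_supports("sse4.2"))    /* pick a code path at run time */
        scale_sse4(v, 4);
    else
        scale_scalar(v, 4);
    printf("%g %g %g %g\n", v[0], v[1], v[2], v[3]);
    return 0;
}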

Then there are some languages that are compiled on the device they run on, which can then leverage the specific CPU and model to the fullest - JITs do this all the time, but there are some specialty languages that statically compile when loaded. Actually, this is a cornerstone of modern 3D graphics - OpenGL and DirectX shader languages do exactly this: statically compile for the GPU you are running on, when loaded (speaking of which - it is actually impossible to program modern GPUs in assembly, yet you *can* program for them in C).
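For the curious, a sketch of that load-time compilation with the OpenGL C API (assumes a GL context has already been created; error handling trimmed):

Code:
#include <GL/gl.h>

/* The driver compiles this source for whatever GPU is present, at the
   moment the program runs -- not when the program was shipped. */
GLuint compile_fragment_shader(const char *src) {
    GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(sh, 1, &src, NULL);
    glCompileShader(sh);

    GLint ok = GL_FALSE;
    glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
    return ok ? sh : 0;
}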

Quote:
I'd love to be corrected about Python on the Apollo 11 computers though.


Very few people code for a computer as resource-constrained as the one on the Apollo 11 - so you're just being stubborn for the sake of being stubborn on that one.

Quote:
Again, those libraries are part of the language, even if not in the official specification. How many of the C coders out there do you think could rewrite any of the standard libs they use so often and have the product turn out as well as the original?


They aren't part of the language, no matter how many times you claim otherwise. A small army of embedded developers happily work without access to the C standard library. Hell, Android doesn't even ship with a full C standard library.

Also, the bad to mediocre programmers tend to be the ones that ignore the standard libraries the most, in favor of crappily re-implementing them because they don't know what is in the standard library.

KermMartian wrote:
It depends on what kind of code it is. A calculator program written in C could be trivially cross-compiled on a whole bunch of platforms without any source changes, since each platform's compiler would turn it into the proper assembly variant, and a calculator doesn't really use any platform-dependent features. A C program that interfaces with hardware is not going to be so trivial to compile cross-platform.


Sure, not *all* C programs are source-portable (an easier example would be those that assume endianness rather than those that interface with hardware), but assembly isn't source-portable at all - so C is infinitely more source portable :p

Quote:
You just lost all credibility with that statement. You're saying that Java can be up to 50*10*10 = 50,000 times faster to write than ASM? So if it takes me 5 minutes to write a simple program in Java, that it will take 250,000 minutes = 173 SOLID DAYS of coding to write the same thing in ASM with no sleep or breaks? Obviously you're resorting to massive hyperbole, and there's no point trying to address your trollpoint.


Well, of course it was hyperbole, but the sentiment is far from wrong and in some cases it isn't wrong at all. Assembly gets drastically slower to work with the larger the code base, and can't leverage as many libraries. For example, Java can leverage things like Tomcat to get an HTML template library and a simple database access that really would take years to re-create in assembly. Or take .NET, where object mapping to a SQL database takes 1-2 minutes, tops (thanks to a healthy mix of tools, the framework, and language features).

EDIT: And don't forget about the debugging part of coding. For C, you compile with debug flags and have the compiler insert a bunch of debug info into your program to allow for symbols and such that you really can't get in assembly. Java even lets you evaluate arbitrary Java expressions while stopped at a breakpoint.
Kllrnohj wrote:
Well, of course it was hyperbole,
Then why did you say "rough estimates" and "*AT LEAST*"? Sounds to me like you're rethinking your original overstatements.
Quote:
but the sentiment is far from wrong and in some cases it isn't wrong at all.
I just gave you a great example of how it's wildly wrong. If a 5-minute program takes 173 days of solid, sleepless coding, then even a simple Python program that takes an hour to put together would be 6 years of ceaseless ASM coding, or in other words IMPOSSIBLE. Clearly that's not the case.
Quote:
Assembly gets drastically slower to work with the larger the code base, and can't leverage as many libraries.
It can leverage as many libraries as are built to plug into it. Cf. Doors CS.
Quote:
For example, Java can leverage things like Tomcat to get an HTML template library and a simple database access that really would take years to re-create in assembly.
If you're using ASM for HTML parsing, then you deserve pain anyway.
Quote:
Or take .NET, where object mapping to a SQL database takes 1-2 minutes, tops (thanks to a healthy mix of tools, the framework, and language features).
Last I checked there's a .NET embedded library at least partially in assembly.

Quote:
EDIT: And don't forget about the debugging part of coding. For C you compile with debug flags and have the compiler insert a bunch of debug info into your program to allow for symbols at such that you really can't in assembly. Java even lets you evaluate arbitrary java expressions while stopped at a breakpoint.
And in a debugging emulator, you can arbitrarily change registers and flags and memory and even code. You can set breakpoints and examine stuff. You can even execute on real hardware and do tricks with output devices or even dumping a log to memory.
Quote:
If you're using ASM for HTML parsing, then you deserve pain anyway.

On the other hand, it might actually encourage you to set up a proper state-machine for it instead of trying to hack one together from handy string manipulation functions.
elfprince13 wrote:
Quote:
If you're using ASM for HTML parsing, then you deserve pain anyway.

On the other hand, it might actually encourage you to set up a proper state-machine for it instead of trying to hack one together from handy string manipulation functions.
Indeed, and it would probably be able to parse well-formed, correct HTML in a tiny code footprint. It would probably handle errors less well, but I think it would be quite decent, now that you mention it.
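A minimal sketch of that state-machine approach, in C for brevity (well-formed input only - no entities, comments, or error recovery):

Code:
#include <stdio.h>

enum state { TEXT, TAG };

int main(void) {
    const char *html = "<p>Hello, <b>World</b>!</p>";
    enum state s = TEXT;

    /* One character, one transition: print tag names in brackets,
       pass text through untouched. */
    for (const char *p = html; *p; p++) {
        switch (s) {
        case TEXT:
            if (*p == '<') { s = TAG; putchar('['); }
            else putchar(*p);
            break;
        case TAG:
            if (*p == '>') { s = TEXT; putchar(']'); }
            else putchar(*p);
            break;
        }
    }
    putchar('\n');
    return 0;
}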
Quote:
Sure, not *all* C programs are source-portable (an easier example would be those that assume endianess than ones that interface with hardware), but assembly isn't source-portable at all - so C is infinitely more source portable :p


That depends on what type of machine you're writing for. I can write *Assembly* code for a virtual machine and have it run on a thousand different platforms.

However, even if you're doing hardware ASM, it's not terribly difficult to port most programs provided that the processor type doesn't change too much. As an example, we've gone through three different probable processors for the Prizm. I haven't had to change my source for a single one of them. If you consider processors in the same family to be cheating, then consider that you can run the binaries for a C program on pretty much any [normal] Windows machine. That means that pretty much the same is possible with ASM source.
...

Quote:
See that little snippet? That can compile exactly as is for hundreds of platforms and dozens of architectures. It is *impossible* to do the same in assembly.



Code:
    .class public HelloWorld
    .super java/lang/Object

    ;
    ; standard initializer (calls java.lang.Object's initializer)
    ;
    .method public <init>()V
       aload_0
       invokenonvirtual java/lang/Object/<init>()V
       return
    .end method

    ;
    ; main() - prints out Hello World
    ;
    .method public static main([Ljava/lang/String;)V
       .limit stack 2   ; up to two items can be pushed

       ; push System.out onto the stack
       getstatic java/lang/System/out Ljava/io/PrintStream;

       ; push a string onto the stack
       ldc "Hello World!"

       ; call the PrintStream.println() method.
       invokevirtual java/io/PrintStream/println(Ljava/lang/String;)V

       ; done
       return
    .end method


There's your portable hello world. That'll assemble for thousands of platforms and hundreds of architectures. Again, Assembly isn't limited to hardware.

PS: It's Jasmin, which assembles to Java Bytecode.
Qwerty.55 wrote:
There's your portable hello world. That'll assemble for thousands of platforms and hundreds of architectures. Again, Assembly isn't limited to hardware.

PS: It's Jasmin, which assembles to Java Bytecode.


Java bytecode is most certainly not the same thing as machine code: bytecode (Java or any other sort) is executed by an interpreter. Machine code is executed directly by the CPU.
Quote:
That means that pretty much the same is possible with ASM source.

Sure, if you feel like catering to the lowest common denominator. With C you can provide several different binaries for different x86 architectures that cater to the capabilities of different machines without doing anything but changing the compilation flags.

TC01 wrote:
Qwerty.55 wrote:
There's your portable hello world. That'll assemble for thousands of platforms and hundreds of architectures. Again, Assembly isn't limited to hardware.

PS: It's Jasmin, which assembles to Java Bytecode.


Java bytecode is most certainly not the same thing as machine code: bytecode (Java or any other sort) is executed by an interpreter. Machine code is executed directly by the CPU.

Java bytecode is machine code for the Java Virtual Machine. This is much different from how, say, Python bytecode is executed.
TC01 wrote:
Java bytecode is most certainly not the same thing as machine code: bytecode (Java or any other sort) is executed by an interpreter. Machine code is executed directly by the CPU.
And as elfprince13 correctly stated, Java bytecode is executed by the Java Virtual Machine. Any interpreting happens in the on-the-fly conversion of bytecode to native code.
Qwerty.55 wrote:
That depends on what type of machine you're writing for. I can write *Assembly* code for a virtual machine and have it run on a thousand different platforms.


Nope, it actually only runs on one platform - which would be whatever the VM is.

Quote:
However, even if you're doing hardware ASM, it's not terribly difficult to port most programs provided that the processor type doesn't change too much. As an example, we've gone through three different probable processors for the Prizm. I haven't had to change my source for a single one of them. If you consider processors in the same family to be cheating, then consider that you can run the binaries for a C program on pretty much any [normal] Windows machine. That means that pretty much the same is possible with ASM source.


There is no standard for assembly, so you can't even be source-portable on the same platform. The hello world I wrote will compile with both VC and mingw on Windows - yet you cannot write an assembly program that will assemble with both. So not only are you tied to a specific instruction set, you are tied to a specific assembler.

Quote:
There's your portable hello world. That'll assemble for thousands of platforms and hundreds of architectures. Again, Assembly isn't limited to hardware.

PS: It's Jasmin, which assembles to Java Bytecode.


No, it actually only compiles for 1 platform and 1 architecture using 1 assembler. The only thing that can assemble that is Jasmin, and the only thing that can run it is the JVM. You cannot compile that for x86 - not source portable.

KermMartian wrote:
Then why did you say "rough estimates" and "*AT LEAST*"? Sounds to me like you're rethinking your original overstatements.


Not re-thinking it at all, actually.

Quote:
I just gave you a great example of how it's wildly wrong. If a 5-minute program takes 173 days of solid, sleepless coding, then even a simple Python program that takes an hour to put together would be 6 years of ceaseless ASM coding, or in other words IMPOSSIBLE. Clearly that's not the case.


Yes, and I've already given an example where it's completely plausible, what's your point?

Also, FYI, you've screwed up the math pretty majorly. An hour spent coding Java would be 1*50*10 = 500 hours = ~21 days, not 6 years (big lul wut on that btw - figured you of all people would know how to use a calculator). Likewise, a 5 minute Java program would be ~40 hours, not 173 days. Of course, that's also taking the upper bound, I said 5-10x for Java vs. C, not 10x.

Quote:
It can leverage as many libraries are built to plug into it. Cf. Doors CS.


Yes and no. There are plenty of libraries that aren't native code in the first place (good luck calling Java or Python from assembly).

Quote:
If you're using ASM for HTML parsing, then you deserve pain anyway.


Someone doesn't know J2EE.

None of that had anything to do with parsing HTML, btw. That was more or less the Java equivalent of PHP - as in, to serve up web pages (or to do web services with WSDL+SOAP).

Quote:
Last I checked there's a .NET embedded library at least partially in assembly.


Yes, and if you use it you are coding in .NET, not assembly... (it's also pretty limited, even more so than .NET CF)

Quote:
And in a debugging emulator, you can arbitrarily change registers and flags and memory and even code. You can set breakpoints and examine stuff. You can even execute on real hardware and do tricks with output devices or even dumping a log to memory.


I didn't say it's impossible to debug assembly (there are actually some really good assembly debuggers out there - hugely important for reverse engineering efforts).
Kllrnohj wrote:
Quote:
I just gave you a great example of how it's wildly wrong. If a 5-minute program takes 173 days of solid, sleepless coding, then even a simple Python program that takes an hour to put together would be 6 years of ceaseless ASM coding, or in other words IMPOSSIBLE. Clearly that's not the case.


Yes, and I've already given an example where it's completely plausible, what's your point?

Also, FYI, you've screwed up the math pretty majorly. An hour spent coding Java would be 1*50*10 = 500 hours = ~21 days, not 6 years (big lul wut on that btw - figured you of all people would know how to use a calculator). Likewise, a 5 minute Java program would be ~40 hours, not 173 days. Of course, that's also taking the upper bound, I said 5-10x for Java vs. C, not 10x.
Is it really necessary to be such a jerk when someone is wrong? Yeah, call him out on it, but "big lul wut on that btw - figured you of all people would know how to use a calculator" is just being mean for the sake of being mean.
Kllrnohj, you originally said:
Quote:
Assembly is *at least* 50 times slower to write than C, which in itself is about 5-10 times slower than, say, Java, which is then 5-10 times slower to write than Python (all very rough estimates, of course)
I said that a 5 minute Python program by your estimates would take 173 solid days of coding. 5*50*5*5 = 6250 minutes on the low end, or 5*50*10*10 = 25000 minutes on the high end, or between 4.3 and 17.4 days of solid coding. I was off by a factor of ten. Regardless, your numbers are ridiculously stupid, if we're devolving into ad hominem attacks.
KermMartian wrote:
I said that a 5 minute Python program by your estimates would take 173 solid days of coding. 5*50*5*5 = 6250 minutes on the low end, or 5*50*10*10 = 25000 minutes on the high end, or between 4.3 and 17.4 days of solid coding. I was off by a factor of ten. Regardless, your numbers are ridiculously stupid, if we're devolving into ad hominem attacks.


No, you said a 5 minute *Java* program would be 173 solid days of coding:
KermMartian wrote:
You're saying that Java can be up to 50*10*10 = 50,000 times faster to write than ASM? So if it takes me 5 minutes to write a simple program in Java, that it will take 250,000 minutes = 173 SOLID DAYS of coding to write the same thing in ASM with no sleep or breaks?


Also, thanks for re-quoting me, note the bit on the end "all very rough estimates, of course".

@merthsoft: Yes, yes it is.
"Very rough estimate" doesn't mean "off by three or four orders of magnitude," it means off by at most a factor of three or four.
KermMartian wrote:
"Very rough estimate" doesn't mean "off by three or four orders of magnitude," it means off by at most a factor of three or four.


And I'm nowhere near off by three or four orders of magnitude, seeing as we are only talking about 2 orders of magnitude in the first place.
Kllrnohj wrote:
KermMartian wrote:
"Very rough estimate" doesn't mean "off by three or four orders of magnitude," it means off by at most a factor of three or four.


And I'm nowhere near off by three or four orders of magnitude, seeing as we are only talking about 2 orders of magnitude in the first place.
You're still trying to convince me that it would take ONE HUNDRED AND FOUR HOURS to write a simple application in ASM that would take 5 minutes in Python.
KermMartian wrote:
You're still trying to convince me that it would take ONE HUNDRED AND FOUR HOURS to write a simple application in ASM that would take 5 minutes in Python.


Kind of. Near the minimal programming time, the time differences get messed up because they don't have a chance to average out, and the complexity of the application is drastically lower. But for example:


Code:
def power(a, b):
    return a ** b
print("2^64 = {0}".format(power(2, 64)))


That took me about 20 seconds to write. Realize just what that actually contains.

1) Big number support - the math works for numbers of any size (particularly note the seamless transition from int to big numbers as well)
2) String formatting without specifying type (which requires run time typing and for data to format itself)
3) Variable argument functions
4) Multi-platform.

Good luck getting an assembly version of that in less than a week.
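For comparison, a sketch of just the big-number piece in C, assuming the GMP library is available (link with -lgmp) - and note that even this leans on a library:

Code:
#include <stdio.h>
#include <gmp.h>

int main(void) {
    mpz_t r;                       /* arbitrary-precision integer */
    mpz_init(r);
    mpz_ui_pow_ui(r, 2, 64);       /* r = 2^64 */
    gmp_printf("2^64 = %Zd\n", r);
    mpz_clear(r);
    return 0;
}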

But aside from that, you keep focusing on "simple application" - simple applications by definition are simple. Of course the time to create simple applications is less. "Hello, World!", for example, certainly doesn't follow the differences I provided.

On the other hand, a multithreaded HTTP server with dynamic loading of modules to handle certain paths (as in, if you go to server/index.py, the server (re)loads index.py and executes it, passing in the request as a parameter) is a ~20 minute job in Python (that would be 20 minutes including debugging and testing). Would it take a month to do that in assembly? Maybe, maybe not. Would it take more than a day? Absolutely. More than a week? Probably.

Oh, and also keep in mind that assembly on modern OSes is pretty much completely undocumented.

EDIT: But I'm really not sure why you think I'm trying to convince you of the numbers I stated specifically, I'm definitely not. In the initial statement I said "all very rough estimates of course". Then when you called it out as hyperbole I immediately agreed - why are you still fixated on the specific numbers?
Well, now you're just proving my points for me; I love arguing with you.
Quote:
1) Big number support - the math works for numbers of any size (particularly note the seamless transition from int to big numbers as well)
2) String formatting without specifying type (which requires run time typing and for data to format itself)
As you so helpfully proved, a library must at least be available for this sort of thing, one that defines the data structures and functions for manipulating things like big numbers and string formatting. If such a library exists, then it's pretty trivial to use it from any language, including assembly. You think that the library could be rewritten in Python within that twenty seconds? You're making an utterly fallacious argument by comparing one language with libraries assumed pre-made on one hand, and another language with libraries assumed non-existent on the other.
KermMartian wrote:
As you so helpfully proved, a library must at least be available for this sort of thing, one that defines the data structures and functions for manipulating things like big numbers and string formatting. If such a library exists, then it's pretty trivial to use it from any language, including assembly. You think that the library could be rewritten in Python within that twenty seconds? You're making an utterly fallacious argument by comparing one language with libraries assumed pre-made on one hand, and another language with libraries assumed non-existent on the other.


A significant number of Python's libraries are written in Python itself, actually (including Python's string formatting, as it turns out). Calling that from assembly would first require embedding Python, at which point any advantage of using assembly goes right out the window. Same for Java and .NET libraries.
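For reference, this is what "embedding Python" means in practice - a sketch using the official CPython C API (compile and link flags typically come from python3-config):

Code:
#include <Python.h>

int main(void) {
    Py_Initialize();    /* start a Python interpreter inside this process */
    PyRun_SimpleString("print('2^64 = {0}'.format(2 ** 64))");
    Py_Finalize();
    return 0;
}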

Also, the transparent conversion from int to big numbers requires language-level support, you will not find that in a library.

KermMartian wrote:
Well, now you're just proving my points for me; I love arguing with you.


What point, exactly? Are you disagreeing with the numbers I provided, or are you disagreeing that Python is faster to code in than Java, which is faster to code in than C, which is way faster to code in than assembly?
Kllrnohj wrote:
Also, the transparent conversion from int to big numbers requires language-level support, you will not find that in a library.
Transparent, sure, but considering that the point of writing in assembly is often to write very tight, very fast code, I think some explicit conversion is more than acceptable, and indeed desirable when you don't want any surprise side effects to come back to bite you.
  