blocks of memory. Such a representation, while producing the expected assembly, reduces the transformations we have at hand for optimizing said structure. An example would be slicing arrays: in C/C++ we would have to do things manually and either rely on templates to abstract it away or be very careful when operating on the array. Higher-level languages which view arrays as objects would simply produce two new array objects for us to operate on. This covers both copying, in case we still need the original array, and in-place modification, if we do not reference the original, since we can treat the new array objects as regular arrays even when they are just "views" into a much larger array. Again, in C/C++ you would have to do all of this work manually, which is very error-prone and is usually avoided, as reliable software has a higher value than the efficiency of certain operations. We lose a possible optimization simply because C/C++'s representation of arrays is not abstract enough.

This is one example out of many. Another is loop fusion, which C compilers do perform but cannot guarantee the way Haskell can with folds: if it's an explicit fold, the compiler can remove all intermediate representations and fuse the loops. This is why in Haskell we gain performance by using higher-level representations and operations rather than lose it: the compiler has much more to work with.

Catamorphisms and anamorphisms used for mergesort come to mind, as they seem inefficient: ana builds a structure only for cata to tear it down again directly after. However, given that cata is an explicit fold and ana is an explicit unfold, the compiler (GHC) is able to detect this composition, erase all intermediate structures, and transform the algorithm into a near hand-optimized Core representation. Why? Haskell is aware of folds. Could C do such things? No; unless you make C aware of more structure, it is simply not possible to take advantage of such things. Can Java do this? Yes, for anything relying on higher-level abstract types, since the compiler is aware of them.
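To make the slicing point concrete, here's a minimal Haskell sketch using Data.Vector (from the vector package), whose `slice` is an O(1) view rather than a copy; the `halves` helper is just my own illustration:

```haskell
import qualified Data.Vector as V

-- A slice in Data.Vector is an O(1) view: no elements are copied,
-- yet the result behaves like an ordinary vector.
halves :: V.Vector a -> (V.Vector a, V.Vector a)
halves v = (V.slice 0 mid v, V.slice mid (V.length v - mid) v)
  where mid = V.length v `div` 2

main :: IO ()
main = do
  let v = V.fromList [1 .. 10 :: Int]
      (lo, hi) = halves v
  print (V.toList lo)  -- [1,2,3,4,5]
  print (V.toList hi)  -- [6,7,8,9,10]
```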
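The fold-fusion claim can be seen in a one-liner. Assuming compilation with `ghc -O2`, foldr/build fusion should turn this pipeline into a single accumulating loop in Core, with no intermediate list allocated (`sumDoubled` is a made-up name for illustration):

```haskell
-- Both map and the enumeration [1 .. n] are fusible forms,
-- so with -O2 GHC fuses the whole pipeline into one loop.
sumDoubled :: Int -> Int
sumDoubled n = sum (map (* 2) [1 .. n])
```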
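And here's a minimal hand-rolled sketch of the cata/ana mergesort (the names `TreeF`, `hylo`, `split`, `merge` are my own, not from any library): `hylo alg coalg` is exactly the fused form of `cata alg . ana coalg`, i.e. the shape the compiler is said to reach after erasing the intermediate structure, so writing it directly guarantees the intermediate tree is never materialized:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Base functor of the merge tree: this is the intermediate
-- structure ana builds and cata immediately tears down.
data TreeF a r = Leaf | Single a | Node r r deriving Functor

newtype Fix f = Fix (f (Fix f))

cata :: Functor f => (f b -> b) -> Fix f -> b   -- explicit fold
cata alg (Fix t) = alg (fmap (cata alg) t)

ana :: Functor f => (b -> f b) -> b -> Fix f    -- explicit unfold
ana coalg = Fix . fmap (ana coalg) . coalg

-- hylo is the fused form of (cata alg . ana coalg):
-- the Fix tree in that composition never exists here.
hylo :: Functor f => (f b -> b) -> (a -> f a) -> a -> b
hylo alg coalg = alg . fmap (hylo alg coalg) . coalg

split :: [a] -> TreeF a [a]                     -- coalgebra: divide
split []  = Leaf
split [x] = Single x
split xs  = Node l r where (l, r) = splitAt (length xs `div` 2) xs

merge :: Ord a => TreeF a [a] -> [a]            -- algebra: conquer
merge Leaf         = []
merge (Single x)   = [x]
merge (Node xs ys) = go xs ys
  where
    go [] bs = bs
    go as [] = as
    go as@(a:at) bs@(b:bt)
      | a <= b    = a : go at bs
      | otherwise = b : go as bt

mergesortNaive, mergesort :: Ord a => [a] -> [a]
mergesortNaive = cata merge . ana split  -- builds the whole tree first
mergesort      = hylo merge split        -- fused: no intermediate tree
```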
Source?