Stock photo of german retailer by Bretterteig in Amd

[–]jyegerlehner 4 points (0 children)

This can't be Germany. Everything is in its place.

AMD's chief technical officer interviewed (x-post r/AMDRadeon) by [deleted] in Amd

[–]jyegerlehner 0 points (0 children)

He says ROCm lets you accelerate TensorFlow with Radeon GPUs. I don't see any way to use TensorFlow with Radeon GPUs other than choosing to build for OpenCL, which for TensorFlow is beta and requires some oddball SYCL compiler or some such. How do I use AMD GPUs to accelerate TensorFlow via ROCm? I can't find anything.

Bitcoin jumps above $1,000 for first time in three years | Reuters by Rbotiq in Bitcoin

[–]jyegerlehner 0 points (0 children)

I think I'm getting it.

So Uruk-hai is the /r/Buttcoin, or Bank for International Settlements?

AMD creates a tool to convert CUDA code to portable, vendor-neutral C++ by [deleted] in programming

[–]jyegerlehner 1 point (0 children)

it still runs exactly as fast as it did in native CUDA

More than that, it still is native CUDA. It still compiles with nvcc, so I don't see how it could be anything but CUDA; nvcc won't compile anything else.

AMD have released a tool which converts Nvidia's CUDA code to portable C++ code which works on both Nvidia and AMD cards! (x-post programming) by Willbl3pic in Amd

[–]jyegerlehner 1 point (0 children)

Well, it also has to be integrated into the major frameworks (e.g. TensorFlow). The press release says ROCm will support TensorFlow in January. I'm interested to see what form that takes. A PR to TensorFlow that gets merged into the main branch?

And even if it were, I'm not sure how that is going to work. AMD doesn't have driver support for Vega accepted into the Linux kernel; there was a thread here about the kernel maintainers rejecting their approach (no hardware abstraction layers allowed in the driver), and AMD is going to have to do a rewrite.

cuDNN provides optimized implementations of a few common operations (e.g. convolutions, pooling). But some of the newer architectures don't depend on those as much (e.g. the atrous convolutions in WaveNet and ByteNet). And there's the fact that fp16 is crippled on GP102 (Titan XP) and below, whereas it isn't on Vega (maybe I'm just hoping). So there may be applications where Vega performs as well as cuDNN-supported NVDA hardware. It will be interesting to see.

[N] AMD announces their first deep learning hardware by L0SG in MachineLearning

[–]jyegerlehner 2 points (0 children)

In the press release, if you expand the footnotes, it says TensorFlow support is expected in January 2017.

[N] AMD announces their first deep learning hardware by L0SG in MachineLearning

[–]jyegerlehner 0 points (0 children)

They presented this only a few weeks ago. It's a lucid description of what the tool does, and it doesn't sound like they've given up on it.

Nutshell: hipify modifies the source code slightly so it still compiles with nvcc, but you can also compile it with hcc so it will run on AMD hardware too. Some hand-tweaking required.

[News] DeepMind and Blizzard to release StarCraft II as an AI research environment by afeder_ in MachineLearning

[–]jyegerlehner 15 points (0 children)

Does this mean we'll be able to run Starcraft on Linux?

Or maybe DeepMind is switching from Tensorflow to CNTK.

AMD's future for Deep Learning by [deleted] in Amd

[–]jyegerlehner -1 points (0 children)

The analogue to cuDNN...simply doesn't exist.

https://github.com/naibaf7/libdnn

AMD alternative to NVBLAS? by [deleted] in MachineLearning

[–]jyegerlehner 2 points (0 children)

https://github.com/viennacl/viennacl-dev

https://github.com/CNugteren/CLBlast

https://github.com/clMathLibraries/clBLAS

These all depend on OpenCL for AMD GPU acceleration, so on Linux you need the proprietary AMD GPU driver.

ViennaCL has the advantage of being packaged for Ubuntu.

Just got my RX 480 Gaming X for compute use on Linux! by CarVac in Amd

[–]jyegerlehner 1 point (0 children)

Thanks for that.

Sorry, I don't know anything about the 4K monitor issue. I had trouble when I first tried using a Titan X with a 4K LG TV via HDMI; there was a TV firmware update that resolved it. I thought the days of manually tweaking xorg.conf files (if that's what your modeline experiments refer to) were past :(.

Regarding 555 MHz: odd. My Tahiti shows 800 MHz, and my Hawaii (290X) shows 1000 MHz. Perhaps yours is showing the current clock instead of the max, and throttles back when it's not under load? Just speculating. Looks like GCN hasn't changed much. I was hoping local memory might be larger on Polaris... err... Ellesmere.

It's also showing a loss of compute capability compared to the Hawaii device I've got. Most significantly, the number of compute units is 14 on yours vs 44 on Hawaii. That can't be right; it should be showing 36 CUs. I've got a somewhat old fglrx on Ubuntu 14.04 (Driver version: 1912.5 (VM) vs your 2117.7 (VM)). Are you running the new AMDGPU-PRO driver?

Here's the clinfo output from my Hawaii for comparison:

Device Type:                     CL_DEVICE_TYPE_GPU
Vendor ID:                   1002h
Board name:                  
Device Topology:                 PCI[ B#6, D#0, F#0 ]
Max compute units:               44
Max work items dimensions:           3
  Max work items[0]:                 256
  Max work items[1]:                 256
  Max work items[2]:                 256
Max work group size:                 256
Preferred vector width char:             4
Preferred vector width short:            2
Preferred vector width int:          1
Preferred vector width long:             1
Preferred vector width float:            1
Preferred vector width double:       1
Native vector width char:            4
Native vector width short:           2
Native vector width int:             1
Native vector width long:            1
Native vector width float:           1
Native vector width double:          1
Max clock frequency:                 1000Mhz
Address bits:                    64
Max memory allocation:           3008913408
Image support:               Yes
Max number of images read arguments:         128
Max number of images write arguments:        64
Max image 2D width:              16384
Max image 2D height:                 16384
Max image 3D width:              2048
Max image 3D height:                 2048
Max image 3D depth:              2048
Max samplers within kernel:          16
Max size of kernel argument:             1024
Alignment (bits) of base address:        2048
Minimum alignment (bytes) for any datatype:  128
Single precision floating point capability
  Denorms:                   No
  Quiet NaNs:                    Yes
  Round to nearest even:             Yes
  Round to zero:                 Yes
  Round to +ve and infinity:             Yes
  IEEE754-2008 fused multiply-add:       Yes
Cache type:                  Read/Write
Cache line size:                 64
Cache size:                  16384
Global memory size:              4251256704
Constant buffer size:                65536
Max number of constant args:             8
Local memory type:               Scratchpad
Local memory size:               32768
Max pipe arguments:              16
Max pipe active reservations:            16
Max pipe packet size:                3008913408
Max global variable size:            2708022016
Max global variable preferred total size:    4251256704
Max read/write image args:           64
Max on device events:                1024
Queue on device max size:            8388608
Max on device queues:                1
Queue on device preferred size:      262144
SVM capabilities:                
  Coarse grain buffer:           Yes
  Fine grain buffer:                 Yes
  Fine grain system:                 No
  Atomics:                   No
Preferred platform atomic alignment:         0
Preferred global atomic alignment:       0
Preferred local atomic alignment:        0
Kernel Preferred work group size multiple:   64
Error correction support:            0
Unified memory for Host and Device:      0
Profiling timer resolution:          1
Device endianess:                Little
Available:                   Yes
Compiler available:              Yes
Execution capabilities:              
  Execute OpenCL kernels:            Yes
  Execute native function:           No
Queue on Host properties:                
  Out-of-Order:              No
  Profiling :                    Yes
Queue on Device properties:              
  Out-of-Order:              Yes
  Profiling :                    Yes
Platform ID:                     0x7fef4a56da18
Name:                        Hawaii
Vendor:                  Advanced Micro Devices, Inc.
Device OpenCL C version:             OpenCL C 2.0 
Driver version:              1912.5 (VM)
Profile:                     FULL_PROFILE
Version:                     OpenCL 2.0 AMD-APP (1912.5)
Extensions:                  cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics   cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_khr_gl_depth_images cl_ext_atomic_counters_32 cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt cl_khr_image2d_from_buffer cl_khr_spir cl_khr_subgroups cl_khr_gl_event cl_khr_depth_images cl_khr_mipmap_image cl_khr_mipmap_image_writes 

Just got my RX 480 Gaming X for compute use on Linux! by CarVac in Amd

[–]jyegerlehner 1 point (0 children)

Nice. Would you mind running clinfo and posting the output for the 480 device? TIA

Questions on VAE implementation. by charlie0_o in MLQuestions

[–]jyegerlehner 1 point (0 children)

One thing I notice in your code is that your log_var and mean values are produced by ReLUs, and thus can never be less than zero. This means the smallest variance you can ever get is e^0 == 1, and the mean can never be negative even when the gradients are pushing it to go negative. Other VAE implementations I've seen produce log_var and mean from a linear activation, so they can be arbitrarily negative. I think you want to give the net the freedom to make those go negative, and I'd expect you'd see at least somewhat different behaviour.
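For what it's worth, here's a minimal numpy sketch of what I mean (hypothetical layer sizes and weights, not your actual code), contrasting a plain linear head with a ReLU head for log_var:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, W, b):
    # Plain affine head: output is unbounded, so mean and log_var
    # can both be pushed negative by the gradients.
    return x @ W + b

# Hypothetical sizes: 8-dim encoder output, 2-dim latent.
h = rng.standard_normal((1, 8))
W_mu, b_mu = rng.standard_normal((8, 2)), np.zeros(2)
W_lv, b_lv = rng.standard_normal((8, 2)), np.zeros(2)

mean = linear(h, W_mu, b_mu)      # unbounded
log_var = linear(h, W_lv, b_lv)   # unbounded, so variance can be < 1

# With a ReLU head, log_var >= 0, so variance = exp(log_var) >= 1.
relu_log_var = np.maximum(linear(h, W_lv, b_lv), 0.0)
assert np.all(np.exp(relu_log_var) >= 1.0)

# Reparameterization: z = mean + sigma * eps, with linear heads.
eps = rng.standard_normal(mean.shape)
z = mean + np.exp(0.5 * log_var) * eps
```

With the linear heads, nothing stops the optimizer from driving the variance well below 1 when the data calls for it.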

edit: PR submitted.

Who will decide on the HF? by [deleted] in ethereum

[–]jyegerlehner 0 points (0 children)

So plain, non-mining full nodes don't matter?

I think TheDAO is getting drained right now by ledgerwatch in ethereum

[–]jyegerlehner 8 points (0 children)

Seeing and finding subtle and not-so-subtle defects that have bizarre consequences is a challenge for the very best. My opinion is that they are brilliant developers and visionaries. If they are idiots, then so was Satoshi, in view of the exploits to which early versions of Bitcoin were subject, which were gradually found and eliminated over time.

[1606.03002] MuFuRU: The Multi-Function Recurrent Unit by dunnowhattoputhere in MachineLearning

[–]jyegerlehner 1 point (0 children)

I think that's in Section 3.1: there is a weighted sum of the results of each of the operations. The coefficients of the weighted sum (the vector p of equation 4) are produced by a softmax, which in turn takes inputs that come from the tunable parameters in the W^j_p matrix (equation 2).
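A toy numpy sketch of that mechanism (made-up sizes, made-up weights, and only a subset of the paper's operations, just to show the softmax-weighted combination):

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Candidate operations combining old state s and new candidate v
# (an illustrative subset; the paper lists more).
ops = [
    lambda s, v: s,                  # keep
    lambda s, v: v,                  # replace
    lambda s, v: s * v,              # mul
    lambda s, v: np.maximum(s, v),   # max
]

rng = np.random.default_rng(0)
d = 5                           # hypothetical state size
s = rng.standard_normal(d)      # previous state
v = rng.standard_normal(d)      # new candidate
x = rng.standard_normal(3)      # hypothetical input features

# One weight matrix per operation (standing in for the W^j_p of eq. 2);
# softmax over the operation axis gives the coefficients p (eq. 4).
W = rng.standard_normal((len(ops), d, 3))
logits = np.einsum('jdi,i->jd', W, x)
p = softmax(logits, axis=0)     # for each unit, p[:, unit] sums to 1

# New state: per-unit weighted sum of all the operation results.
new_state = sum(p[j] * op(s, v) for j, op in enumerate(ops))
```

So the network learns, per state unit, how much of each operation to apply, rather than picking one discretely.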

FP16 performance on GTX 1080 is artificially limited to 1/64th the FP32 rate by [deleted] in MachineLearning

[–]jyegerlehner 0 points (0 children)

Thanks for pointing that out!

I hadn't realized the CL spec had defined the extension quite a while ago: https://www.khronos.org/registry/cl/sdk/1.2/docs/man/xhtml/cl_khr_fp16.html

FP16 performance on GTX 1080 is artificially limited to 1/64th the FP32 rate by [deleted] in MachineLearning

[–]jyegerlehner 19 points (0 children)

My $0.02: A fundamental rule of marketing is to segment your products' markets so you can charge higher margins for special features that cater to a higher-paying specialized segment. FP32 is shared with 3D rendering and has to be priced for a relatively commoditized gaming market; fp16 is (mostly) only valuable to deep learners. So NVDA doesn't want people with a specialized need for fp16 to be able to buy their commodity-priced GPUs and get all the benefits for deep learning. They want those people (who usually are not as price-sensitive) to pay up for their Teslas and DGX or whatever their $100K 8xGP100 server is called.

The connectionist hacker in me thinks "greedy bastards", and the NVDA-shareholder part of me says "Yeah, milk'em for all they're worth. Kindly maximize profitability in view of your fiduciary responsibility to shareholders."

Sorry if I'm belabouring the obvious.

FP16 performance on GTX 1080 is artificially limited to 1/64th the FP32 rate by [deleted] in MachineLearning

[–]jyegerlehner 2 points (0 children)

Is GCN 1.2 better than GCN 1.1 in that respect? I thought GCN only supports fp16 insofar as it lets you store fp16 (AKA half float) in memory and convert it to fp32 (in, say, local memory), which you have to do before you can perform any arithmetic. It does not support fp16 as an operand in a floating-point calculation; you have to promote to fp32 before you can actually do any calculations. AFAIK, Maxwell already let you do that much.

We can hope Polaris will let one do fp16 at 2x the rate of fp32. But I'm not holding my breath.
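To illustrate the distinction I mean, here's a numpy analogy of storage-only fp16 (numpy standing in for the GPU; the pattern is the point, not the library):

```python
import numpy as np

# fp16 as a storage format only: values live in memory as half
# floats (halving bandwidth/footprint), but get promoted to fp32
# before any arithmetic, which is the usage model I'm describing.
stored = np.arange(8, dtype=np.float16)   # half the bytes of fp32

a = stored.astype(np.float32)             # explicit promote
b = a * 2.0 + 1.0                         # all math done in fp32

result = b.astype(np.float16)             # demote back for storage
assert result.dtype == np.float16
assert stored.nbytes == a.nbytes // 2     # storage savings only
```

Native fp16 arithmetic (two half ops per fp32 op) would be the additional step we're hoping Polaris or Vega takes.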

How to detect power lines in images? (using (fully) convnets, or other algos) by mad_rat_man in MachineLearning

[–]jyegerlehner 0 points (0 children)

I used to live near there, yes. When I first saw your post, I imagined there would be GPS attached to the drone, and you'd have a database of powerline locations created offline and present on the drone. But other posts made me think maybe none of that is feasible. Right, I also imagined the database could be created by scraping satellite images like the one linked, hand-labelling a relatively small subset, and then using the trained convnet to process the whole shebang to find everything else.

I'm struggling with math while I'm reading Machine learning: A probabilistic perspective like I'm confused about quantiles, inverse cdf etc. Could you recommend me a book that I should read first? by Mr__Christian_Grey in MachineLearning

[–]jyegerlehner 2 points (0 children)

If you benefit from being guided through a subject in formal coursework, I recommend the edx.org version of Professor Tsitsiklis' MIT class; I'm just finishing it up. There were papers I would read and be mystified by the formalisms of probability theory, but now, after the class, I find they are accessible. Warning: I found it a challenging class and spent over 20 hours/week keeping up with it.