Channel: Machine Learning

[P] Fast Sparse PCA using R - An essential tool for your machine learning toolbox!


[D] What is the right way to parallelize rollouts in gym?

I tried Python's multiprocessing Process class, but it doesn't seem to work.

Given that env = gym.make('CartPole-v0'), we have:

For Serial:

    from time import time

    def run1():
        t = time()
        env.reset()
        for _ in range(100):
            env.step(env.action_space.sample())
        print(f'{time() - t} s')

takes 0.0059 s

For Parallel:

    from multiprocessing import Process

    def run2():
        t = time()
        env.reset()
        l = []
        for _ in range(100):
            # each call spawns a brand-new process that steps its own copy of env,
            # so process start-up dominates and the parent's env never advances
            p = Process(target=env.step, args=[env.action_space.sample()])
            p.start()
            l.append(p)
        [p.join() for p in l]
        print(f'{time() - t} s')

takes 0.83 s

Both run 100 steps, yet the parallel version is more than 100 times slower than the serial one.
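A per-step Process is dominated by process start-up and inter-process overhead, so the usual fix is to parallelize at the rollout level rather than the step level. Below is a minimal sketch of that pattern; it is my own illustration, not from the post, and it assumes the classic gym API where step returns (obs, reward, done, info). Each worker builds its own env, since environments generally can't be shared across processes.

    from multiprocessing import Pool
    from time import time

    import gym

    def rollout(steps=100):
        env = gym.make('CartPole-v0')   # each worker owns its own environment
        env.reset()
        total_reward = 0.0
        for _ in range(steps):
            _, reward, done, _ = env.step(env.action_space.sample())
            total_reward += reward
            if done:
                env.reset()
        return total_reward

    if __name__ == '__main__':
        t = time()
        with Pool(4) as pool:                      # four rollouts run in parallel
            returns = pool.map(rollout, [100] * 4)
        print(f'{time() - t} s, returns: {returns}')

Long-lived worker processes with one environment each (as in SubprocVecEnv from OpenAI Baselines) follow the same idea, keeping the workers alive and communicating over pipes instead of re-spawning a process per step.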

submitted by /u/metaAI

[P] Keras implementation of 'Convolutional Sketch Inversion'

[N] AlterEgo: Interfacing with devices through silent speech

[R] Prefrontal cortex as a meta-reinforcement learning system [DeepMind]

[D] Explaining the difference between maximum likelihood, MAP, and Bayesian parameter estimation.

[D] Machine Learning - WAYR (What Are You Reading) - Week 44

This is a place to share machine learning research papers, journals, and articles that you're reading this week. If it relates to what you're researching, by all means elaborate and give us your insight, otherwise it could just be an interesting paper you've read.

Please try to provide some insight from your understanding, and please don't post things that are already in the wiki.

Preferably, link the arXiv abstract page (not the PDF; you can easily get to the PDF from the abstract page, but not the other way around) or any other pertinent links.

Previous weeks:

1-10       11-20      21-30      31-40      41-50
Week 1     Week 11    Week 21    Week 31    Week 41
Week 2     Week 12    Week 22    Week 32    Week 42
Week 3     Week 13    Week 23    Week 33    Week 43
Week 4     Week 14    Week 24    Week 34
Week 5     Week 15    Week 25    Week 35
Week 6     Week 16    Week 26    Week 36
Week 7     Week 17    Week 27    Week 37
Week 8     Week 18    Week 28    Week 38
Week 9     Week 19    Week 29    Week 39
Week 10    Week 20    Week 30    Week 40

Most upvoted papers two weeks ago:

/u/marcossilva_604: https://arxiv.org/abs/1802.06006.

/u/hohomomo1212: Learning to Play with Intrinsically-Motivated Self-Aware Agents (https://arxiv.org/abs/1802.07442). It is a paper on a form of reinforcement learning without extrinsic rewards. Think of a robot wandering around in the world without being told what to do but instead figuring out what to do by itself using a model of its own abilities. That's why they call it self-aware. Pretty cool :)

Besides that, there are no rules, have fun.

submitted by /u/ML_WAYR_bot

[D] Looking for help learning how to read research papers.

I am looking for resources on how to read machine learning research.

In my ideal world, someone would provide me with:

  1. a general guide on how to read a machine learning paper,

  2. a set of papers that are basic and representative of the literature, and that ideally build a fundamental understanding of useful machine learning topics,

  3. a sort of "answer key" to these papers that breaks down the key concepts one should have understood, as well as things that might have slipped under the radar of someone less experienced,

  4. some sort of "book club" (of research papers, of course) for those trying to learn, based either on the aforementioned set of papers or moving beyond it,

  5. a more experienced machine learning engineer willing to at least somewhat guide this book club (ideally leading discussion on occasion, but honestly anyone willing to be a resource in any capacity would be ideal),

  6. some way to guide the development of my skill in judging what's worth reading and what's not.

This is a lot to ask for, and at this point I don't have much to offer in return. If anyone else is interested in the book club idea, I'm willing to organize it, although, as is probably obvious, I lack the experience to properly curate the resources.

submitted by /u/Cartesian_Currents

[N] LSTM inference shoot-out: Intel Skylake vs NVIDIA V100

[N] PyTorch as of April is installable via `pip install torch`

[D] maximum likelihood - how can VAE ignore the latent?

A question about this statement:

"If p(x|z) is given arbitrary flexibility, it can in fact learn to ignore z completely and always output the data distribution for each z: p(x|z) = p(z). ... If you make the generator of a VAE too complex, give it lots of modeling power on top of z, it will ignore your latent variables as they are not needed to achieving a good likelihood."

It is from the blog post by Huszar, "Is Maximum Likelihood Useful for Representation Learning?"; other papers have made the same point.

The question: how is this possible? The VAE needs to maximize the data likelihood term

E_{q(z|x)}[ log p(x|z) ]

for each data point. So it needs to generate an x at the output that resembles the x at the input. If the generator ignores z, I believe it will always output the same pattern, and so it cannot match the particular inputs.

Does it have something to do with considering the cost across all the data in a minibatch simultaneously? I do not see it yet.
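For reference, here is my own restatement (not from the post) of the objective in question. The VAE maximizes the ELBO

    E_{q(z|x)}[ log p(x|z) ] - KL( q(z|x) || p(z) )  <=  log p(x)

for each data point. The usual resolution is that p(x|z) is a whole distribution over x, not a single output pattern: a sufficiently expressive decoder (e.g. an autoregressive one) can set p(x|z) ≈ p_data(x) for every z, and with q(z|x) ≈ p(z) the KL term drops to zero while the reconstruction term still equals the log-likelihood of the data, so the latent ends up carrying no information.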

submitted by /u/knowedgelimited

[D] How to encourage competition and prevent "working together" in genetic algorithms

I got bored tonight and decided to write a genetic algorithm for dupl.io, and I'm curious whether there's any way to "get better" or if it's just a matter of wasting a bunch of time.

How the game works:

You can select any tile you own every second, and this will increment it. Once it hits 5, it will take ownership of the 4 adjacent tiles, increment them, and remove ownership of the current tile. You can take over other land with this. Score is determined by the count of each tile added together.
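As I read those rules, a toy sketch would look something like the following (my own interpretation, not the site's code; the owner/count arrays are invented for illustration):

    def select(owner, count, player, x, y, size=41):
        # Increment a tile the player owns; at 5 it spills onto the 4 neighbours.
        count[y][x] += 1
        if count[y][x] >= 5:
            count[y][x] = 0
            owner[y][x] = None                         # current tile is released
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < size and 0 <= ny < size:
                    owner[ny][nx] = player             # take over the adjacent tile
                    count[ny][nx] += 1

    def score(owner, count, player):
        # Score: sum of counts over all tiles the player owns.
        return sum(c for row_o, row_c in zip(owner, count)
                   for o, c in zip(row_o, row_c) if o == player)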

The way I approached this was to rewrite the game and set up neural networks in a genetic setting. The inputs are a 7x7 grid around each tile, repeated three times: 0/1 for whether the cell is on the grid, count/4 for an own tile, and count/4 for an enemy tile, effectively creating 7*7*3 = 147 inputs. For a given tile owner, I parse each 7x7 grid, get a "score", and the tile with the highest score is selected to upgrade/expand.
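And a hypothetical sketch of that input encoding (again my guess at the details, not the poster's code): three 7x7 planes per candidate tile, flattened into 147 inputs for the network.

    import numpy as np

    def encode(owner, count, me, cx, cy, size=41, k=3):
        # owner[y][x]: player id or None; count[y][x]: tile value 0..4
        planes = np.zeros((3, 2 * k + 1, 2 * k + 1), dtype=np.float32)
        for dy in range(-k, k + 1):
            for dx in range(-k, k + 1):
                x, y = cx + dx, cy + dy
                if 0 <= x < size and 0 <= y < size:
                    planes[0, dy + k, dx + k] = 1.0                    # cell is on the grid
                    if owner[y][x] == me:
                        planes[1, dy + k, dx + k] = count[y][x] / 4    # own tile
                    elif owner[y][x] is not None:
                        planes[2, dy + k, dx + k] = count[y][x] / 4    # enemy tile
        return planes.reshape(-1)   # 7*7*3 = 147 values fed to the network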


So, unfortunately, the way I handled this was by dumping 20 of them (in grid formation, but randomly sorted) into a 41x41 grid, waiting 500 ticks, and doing my genetic reproduction magic. This meant, however, that what took over was an "always go left" strategy, because whoever spawns on the left-most side eats up everybody, and everybody else effectively "surrenders" (I presume that if I left it running long enough, it would also develop a "go down afterwards"). This maximizes the fittest individual, but not the average.

So, what could a solution be? I could put each agent against bots so they have no impact on each other, but then they just learn to exploit the pre-written bots and not necessarily improve beyond the bots. I could also have them run live, on the site, however that would take much, much longer.

I feel like this would apply to tons of other situations as well, like chess. If a bot does well enough on white to take over a population, it could have a mutation that makes it always forfeit as black. This would cause that agent to win even more, and then reproduce even more.

P.S. I don't know if this is unrelated, but if they're all playing each other, how could you see improvements anyway? The fitness score might even go down as they improve and get on equal footing. I've tried to look at stats like "tiles claimed" or "tiles stolen" as an indication of whether or not they're getting better, but I'm having no luck.

EDIT: Another solution could be to split up my population of 20 into four populations of 5, so something that takes over in one could only "control" a quarter of the total population.
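One related remedy, sketched below purely as my own illustration (nothing the poster described), is to score agents against a frozen "hall of fame" of past champions instead of only the current population, so that a single dominant strategy cannot drag everyone else's fitness to zero. The toy play_match function stands in for a real 500-tick game.

    import random

    def play_match(agent, opponent):
        # Stand-in for the dupl.io simulation: agents are just numbers here
        # and the larger one "wins"; replace with the real game rollout.
        return 1.0 if agent > opponent else 0.0

    def fitness(agent, hall_of_fame, matches=5):
        opponents = random.sample(hall_of_fame, min(matches, len(hall_of_fame)))
        return sum(play_match(agent, o) for o in opponents) / len(opponents)

    population = [random.random() for _ in range(20)]
    hall_of_fame = [random.random()]               # seed with one fixed opponent

    for generation in range(10):
        ranked = sorted(population, key=lambda a: fitness(a, hall_of_fame), reverse=True)
        hall_of_fame.append(ranked[0])             # freeze this generation's champion
        parents = ranked[:10]
        population = [p + random.gauss(0, 0.05) for p in parents for _ in range(2)]

    print(f'best agent after 10 generations: {max(population):.3f}')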

submitted by /u/DemiPixel

[P] Genetic Algorithms starter kit for Swift

[D] Screen check for (summer) internship of deep learning in US

I'm a master's student in France applying for summer internships, mostly in the US (for background, I have experience working in a computer vision lab). I have some observations and would like to confirm whether they are true:

  • Is it true that some companies screen for whether the student already has a work permit, and simply reject the application if they don't?

  • Do companies nowadays prefer PhD students over master's students?

submitted by /u/Baduglyboy

[D] looking for help to choose between two machine learning internships.

I have two internship options, both in the field of machine learning. They are as follows

Just to mention, I'm a beginner at machine learning.

Company 1:

- Work is related to detecting patterns in wireless sensor data.

- Would have to build models in Python (which I prefer).

- Would be comparatively easy and would give me free time to pursue online courses in machine learning and math. (This is the biggest advantage for me, as I am weak at math and will have to use it a lot in my thesis.)

- There would be no mentor; I would have to do it all by myself, with only the professor to guide me. (This is the biggest drawback, as they don't have an experienced ML engineer or an existing model I can work with.)

Company 2:

- Works with computer vision (which I don't know).

- Would have to implement algorithms in C++ (which I hear is hard, as several algorithms have to be implemented from scratch).

- Would be hard because I have no knowledge of computer vision and no solid knowledge of machine learning, and it won't give me time to pursue online courses to solidify my skills. (This is the biggest drawback, as my math is weak and the internship wouldn't give me time to brush up on it.)

- There would be an experienced ML engineer and a computer vision expert to guide me, and they have an existing product that I would be working on. (This is the biggest advantage; it would give me a chance to learn a lot.)

My end goal is not to work in the field of computer vision, just to learn machine learning and be good at it. Which one should I go for? The first company gives me the time and flexibility to get better at ML and math on my own but no professional guidance, while the second doesn't give me that time and flexibility but offers the chance to learn from experts and work on an existing system.

Please help me choose.

submitted by /u/thatsadsid

[D] Open Research problems in Deep Learning and Computer Vision?

I am currently exploring Image Captioning, VQA, and Scene Understanding, and I want to gain a deeper understanding. What are some important open research problems in these areas?

submitted by /u/cbsudux

[D] What advantages could a quantum computer have for DL?

[R] Neural Autoregressive Flows

[P] Keras implementation of a CNN for age and gender estimation

[P] Tsetlin Machine Python implementation
