Channel: Machine Learning

[R] Hopkins faculty promote better climate in machine learning


[R] CFP: KDD 2018 Workshop on Outlier Detection De-constructed

[D] Help (re-)finding a tic-tac-toe reward hacking story


Sometime in the past six months I heard an anecdote about reward hacking in reinforcement learning, and I can't find the reference now.

In the story, students in a university computer science course were asked to write an AI to play an infinite-grid variant of tic-tac-toe, and their implementations were run competitively against one another at test time. One student used some sort of optimisation algorithm that discovered an awesome 'hack': by placing a move very far from the origin, it would often cause the opponent's code to run out of memory while processing the move, and so it beat many opponents.

Very cool story, but I can't for the life of me find where I heard it. I think it might have been an r/MachineLearning post, or maybe during one of the ICML talks last year. Does anyone here know the story, and could you point me to it somewhere online?

submitted by /u/aaronsnoswell
[link] [comments]

[D] If I could synthesize any high quality image data set for you with class/feature point/mask annotations, what would you pick?


I've been working on high-quality 3D synthesis of feature points on the hand for training my network, but the technique could likely be applied to any 3D object. I can generate hundreds of thousands of images and annotations in very little time. As a follow-up question, what would you be willing to pay to license or purchase the generated data?

submitted by /u/hwoolery
[link] [comments]

[P] LDA – How to grid search best topic models? (with complete examples in python)

[D] Can I use the Instance Norm for image classification problems?


Hi

In general, batch norm is used for image classification problems, but I wonder whether using instance norm instead would hurt performance. Has anyone tried it?
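
Not part of the original post, but for anyone who wants to run the comparison themselves, here is a minimal PyTorch sketch (toy architecture and MNIST-sized inputs assumed) where the only difference between the two variants is the normalisation layer:

```python
# Minimal sketch: swap BatchNorm2d for InstanceNorm2d in a small classifier.
# The architecture and shapes are placeholders, not the poster's model.
import torch
import torch.nn as nn

def make_classifier(norm="batch", num_classes=10):
    Norm = nn.BatchNorm2d if norm == "batch" else nn.InstanceNorm2d
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), Norm(32), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), Norm(64), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, num_classes),
    )

x = torch.randn(8, 1, 28, 28)                 # fake MNIST-sized batch
print(make_classifier("batch")(x).shape)      # torch.Size([8, 10])
print(make_classifier("instance")(x).shape)   # torch.Size([8, 10])
```

One caveat worth noting: instance norm computes statistics per image and per channel, so unlike batch norm it discards batch-level contrast information, which is one reason it is more common in style transfer than in classification.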

submitted by /u/taki0112
[link] [comments]

[D] Question - Does having shared weights make runtime faster (from an implementation standpoint), or is the benefit just for faster training?


As the title says: what are the main advantages of shared weights, say in a recurrent structure? For runtime, does it use less memory, and will it run faster? For training, will it train faster since the model is more constrained?
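
Not from the original post, but a minimal PyTorch sketch (toy dimensions assumed) makes the memory side of the question concrete: a shared-weight recurrent cell has a parameter count that is independent of sequence length, whereas per-step weights grow linearly with it. The per-step computation is the same either way, so sharing mainly saves memory and constrains training rather than speeding up the forward pass.

```python
# Sketch: parameter count with shared vs. unshared weights over a sequence.
# Dimensions are toy values chosen for illustration only.
import torch
import torch.nn as nn

seq_len, in_dim, hidden = 50, 16, 32

shared_cell = nn.RNNCell(in_dim, hidden)  # one cell reused at every timestep
unshared = nn.ModuleList([nn.RNNCell(in_dim, hidden) for _ in range(seq_len)])

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(shared_cell))  # constant, regardless of seq_len
print(count(unshared))     # roughly seq_len times larger

# Unrolled forward pass with the shared cell: one cell application per step
# either way, so runtime per step is unchanged by sharing.
x = torch.randn(seq_len, 8, in_dim)
h = torch.zeros(8, hidden)
for t in range(seq_len):
    h = shared_cell(x[t], h)
```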

submitted by /u/soulslicer0
[link] [comments]

[P] Intro to XGBoost (detecting Parkinson's with XGBoost Classifiers)


[R] Adding Gradient Noise Improves Learning for Very Deep Networks

[R] A Robust Real-Time Automatic License Plate Recognition based on the YOLO Detector (comprehensible paper with public dataset and weights)

[R] The Emergence of Spectral Universality in Deep Networks

[D] Preventing exploding gradients when using ReLU?


In something I'm currently working on, I've found that switching out my ReLU activations for sigmoids actually ends up letting my network perform better (on MNIST, which is what I'm testing on). With ReLU activations, my gradients start to explode quickly and everything NaNs out.

I've searched around and I don't see many people talking about ReLUs causing gradients to explode, but I don't think there's anything wrong with my implementation. Are there any strategies I can look into for controlling these gradient values? I'd prefer to use ReLU over sigmoid in general.

TLDR: I was trying to use ReLU activations with softmax + cross entropy at the output, found that gradients were exploding. Switched to sigmoids and everything calmed down and worked better, but want to figure out how I can tame the gradients using ReLUs.
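
Not the poster's code, but two widely used ways to keep ReLU networks from blowing up are He/Kaiming initialisation and gradient-norm clipping. A minimal PyTorch sketch (toy MLP on MNIST-shaped inputs assumed):

```python
# Sketch: He initialisation plus gradient clipping for a ReLU network.
# Model, learning rate, and data here are placeholders for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# He initialisation keeps the activation variance roughly constant across
# ReLU layers, which helps prevent runaway magnitudes in the first place.
for m in model.modules():
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
        nn.init.zeros_(m.bias)

opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()  # softmax + cross-entropy in one op

x, y = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))
loss = loss_fn(model(x), y)
loss.backward()

# Clip the global gradient norm before the update so one bad batch cannot
# push the weights to NaN.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
opt.step()
```

Lowering the learning rate is also worth trying: since ReLU does not saturate the way a sigmoid does, a step size that was stable with sigmoids can be too aggressive with ReLUs.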

submitted by /u/ConfuciusBateman
[link] [comments]

[N] Google Staffers Demand End to Work on Pentagon AI project

[R] Super fast algorithm for solving inverse problems regularized with total variation

[P] Google Colaboratory: Automatic Tensorflow/Keras Checkpoints in your Google Drive


[N] AI Grant 3.0: $2500 in cash and $20,000 in cloud credits. Apply by April 14th!


Hi, /r/ml!

Last year my friend Daniel and I launched http://aigrant.org to give away money to people doing interesting open source AI projects. We gave out $100,000 in grants to 30 different projects. You can see some of the winners here: http://aigrant.org/#finalists

We're launching another round of AI Grant. This time, we're giving away up to 30 grants. We reduced the cash amount to $2500, but Google has stepped in to sponsor $20,000 per grantee in GCE credits, which is phenomenal.

The program is open to anyone; no credentials required. We tried to keep the application process as simple as possible so that you can apply in 30 minutes instead of spending days writing a grant proposal. You can apply at https://aigrant.org/apply.html

Applications are due April 14th. We'd be happy to answer questions and would welcome any feedback or ideas.

Thanks!

submitted by /u/nataigrant
[link] [comments]

[P] The Annotated Transformer: Line-by-Line PyTorch implementation of "Attention is All You Need"

[P] DeepLearn - Implementation and reproducible code for deep learning papers on NLP (QA, sentence matching, attention, knowledge base completion), CV (transfer learning, multi-modal learning), and Audio (scene recognition, tagging).

[D] How to (actually) easily detect objects with deep learning

[R] Error Curvature Optimization: Alternative to first-order derivative learning
