Can I use ActionScript code output as an input for Python? - actionscript-3

I have an image-processing project using SURF. My SURF algorithm was coded in ActionScript, and I'm trying to use its output as the input to a classifier in Python. Is that possible? Can I train with that?
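One straightforward way is to have the ActionScript side write the SURF descriptors to a plain text or CSV file and then load that file in Python. A minimal sketch, assuming a hypothetical descriptors.csv in which each row is one descriptor with its class label in the last column (the file name and layout are assumptions, not something from the original question):

import numpy as np
from sklearn.svm import SVC

# Assumption: the ActionScript code exported one descriptor per row,
# with the class label as the last column.
data = np.loadtxt("descriptors.csv", delimiter=",")
features, labels = data[:, :-1], data[:, -1]

clf = SVC(kernel="rbf")
clf.fit(features, labels)
print(clf.score(features, labels))  # training accuracy as a quick sanity check

Any text-based interchange format (CSV, JSON) works here; the only requirement is that Python can parse it into a numeric array.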

Related

Preparing images to feed into tensorflow as datasets

I have used the TensorFlow packaged datasets like MNIST and IMDB to study how TensorFlow works. However, in practical applications we have to preprocess and prepare the dataset ourselves. Suppose I am working with an image dataset: I want to preprocess the images into a format that can be fed into a TensorFlow model. How can I do that?
When working with images, you will usually use a generator.
A generator is a function that yields pairs (u, v), where u is a batch of samples and v the corresponding labels.
An example of how to do this can be found here: How to train TensorFlow network using a generator to produce inputs?.
When building a generator for images, remember that every image is just an array: (x, y) for greyscale, or (x, y, channels) for a colour image.
Your generator therefore needs to read a batch of images from disk and turn them into arrays. There are plenty of tools to handle this: OpenCV, SciPy, PIL.
After loading the images you can apply any manipulations you like (using these tools or others); usually you will need to reshape the images to fit your model.
In the end the generator should yield a pair of shapes ([batch_size, x, y, channels], [batch_size, labels]).
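A minimal sketch of such a generator, assuming a hypothetical image_paths list with a parallel labels list, and using PIL to load the files:

import numpy as np
from PIL import Image

def image_batch_generator(image_paths, labels, batch_size=32, target_size=(64, 64)):
    """Yield (batch_of_images, batch_of_labels) pairs indefinitely."""
    while True:
        for start in range(0, len(image_paths), batch_size):
            batch_paths = image_paths[start:start + batch_size]
            batch_labels = labels[start:start + batch_size]
            images = []
            for path in batch_paths:
                img = Image.open(path).convert("RGB").resize(target_size)
                images.append(np.asarray(img, dtype=np.float32) / 255.0)
            # Shapes: [batch_size, x, y, channels] and [batch_size]
            yield np.stack(images), np.asarray(batch_labels)

Each yielded pair already has the ([batch_size, x, y, channels], [batch_size]) shape described above, so it can be passed straight to a training loop that expects batches.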

pix2pix - get intermediate layers output as PNG using Python

I need some help getting PNG images of the pix2pix intermediate layers (generator), using the pix2pix.py file and TensorFlow, while running a test session. Thanks for helping.
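The general pattern is to fetch the intermediate tensor with sess.run, rescale it to 0-255, and save it with PIL. Below is a minimal, self-contained sketch of that pattern using the TensorFlow 1.x API; the toy conv layer stands in for an actual pix2pix.py generator layer, whose tensor name you would have to look up in that graph yourself:

import numpy as np
import tensorflow as tf
from PIL import Image

# Toy graph: in pix2pix.py you would instead keep a reference to (or look up
# by name) the generator layer whose activations you want to dump.
inputs = tf.placeholder(tf.float32, [1, 256, 256, 3], name="inputs")
intermediate = tf.layers.conv2d(inputs, filters=8, kernel_size=3, padding="same")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(1, 256, 256, 3).astype(np.float32)
    activation = sess.run(intermediate, feed_dict={inputs: batch})
    # Take one channel of one image and rescale it to 0-255 before saving.
    img = activation[0, :, :, 0]
    img = ((img - img.min()) / (img.ptp() + 1e-8) * 255).astype(np.uint8)
    Image.fromarray(img).save("intermediate_layer.png")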

How do I create and train a TensorFlow model with audio inputs?

I have audio files, say "left.wav", "right.wav" and so forth, and I want to create a model which takes audio as input and outputs the label "left", "right", etc.
Question
How do I feed my raw audio to my neural network?
The scipy.io.wavfile.read() function will return the sample rate and the whole audio in a numpy array.
You can then feed that to your network.
from scipy.io import wavfile  # the submodule must be imported explicitly; "import scipy" alone is not enough
rate, numpy_audio = wavfile.read("left.wav")
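To go from single files to training data, you would typically load every file the same way, pad or trim the clips to a common length, and stack them into a batch with integer labels. A minimal sketch, assuming hypothetical file names of the form left_0.wav, right_0.wav, and a 16 kHz, one-second clip length (both are assumptions):

import numpy as np
from scipy.io import wavfile

LABELS = {"left": 0, "right": 1}
FIXED_LEN = 16000  # assume 1 second at 16 kHz; pad or trim everything to this

def load_clip(path):
    _, audio = wavfile.read(path)
    audio = audio.astype(np.float32)
    if audio.ndim > 1:                 # mix stereo down to mono
        audio = audio.mean(axis=1)
    audio = audio[:FIXED_LEN]
    return np.pad(audio, (0, FIXED_LEN - len(audio)))

# Hypothetical file layout: left_0.wav, left_1.wav, right_0.wav, ...
paths = ["left_0.wav", "right_0.wav"]
x = np.stack([load_clip(p) for p in paths])           # shape [num_clips, FIXED_LEN]
y = np.array([LABELS[p.split("_")[0]] for p in paths])

The x array (raw waveforms) and y array (integer labels) can then be fed to the network, either directly or after converting the waveforms to spectrograms.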
If you want to do speech recognition, check out DeepSpeech, it's a large project, but you can probably get some good ideas there.
For a simpler intro, Tensorflow has a Simple Audio Recognition tutorial.
To generate audio, you might want to consider WaveNet - this is one particular implementation.

Can we use YOLO to detect and recognize text in an image

Currently I am using a deep learning model called "YOLOv2" for object detection, and I want to use it to extract text and save it to disk, but I don't know how to do that. If anyone knows more about this, please advise me.
I use TensorFlow.
Thanks
If you use the pretrained model, you would need to save its detections and feed the cropped images into a character-recognition network (if you are using a neural net) or some other OCR approach.
What you are doing is "scene text recognition". You can check out the Reading Text in the Wild with Convolutional Neural Networks paper, here's a demo and homepage. Github user chongyangtao has a whole list of resources on the topic.
I had a similar question, and I am making a digit-detection model with the SVHN dataset. It is not a finished project yet, but it seems to work well. You can see the code at Yolo-digit-detector.
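The "save its detections" step above usually means cropping each detected box out of the image and writing the crops to disk so a separate recognition model can read them. A minimal sketch of that step, assuming hypothetical detections given as (x, y, w, h) pixel boxes (how you obtain them depends on which YOLOv2 implementation you use):

import os
import cv2

def save_text_crops(image_path, boxes, out_dir="crops"):
    """Crop each detected (x, y, w, h) box and save it as a PNG for OCR."""
    os.makedirs(out_dir, exist_ok=True)
    image = cv2.imread(image_path)
    for i, (x, y, w, h) in enumerate(boxes):
        crop = image[y:y + h, x:x + w]
        cv2.imwrite(os.path.join(out_dir, f"crop_{i}.png"), crop)

# Hypothetical boxes produced by your YOLOv2 text detector.
save_text_crops("page.jpg", [(40, 30, 120, 25), (40, 70, 200, 25)])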

SVM Training C#

I've been assigned to read text from captcha images.
Input training images are given.
The part where I'm stuck is that I have to train an SVM with the given training sample data. After a lot of searching and headache, I still don't know how to start the SVM training. I've installed OpenCV from NuGet, Emgu CV and Accord.NET, but still fruitless.
Any help would be really appreciated.
Captcha Images are like this.
These are the step-by-step instructions my instructor gave:
1. There is Captcha data in the test folders. Divide the test data into two parts, training data (60%) and test data (40%)
2. Figure out a way to segment out the 4 letters in each image of training data.
3. Figure out a way to store them automatically in Class folders
(all A's in a folder named “A”, etc.)
4. Train an SVM on the segmented data to get a training file.
5. Use the Test data with the trained file to read the captchas.
6. Report accuracy automatically.
I'm badly stuck at step 4.
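The question is about C#, but the shape of step 4 is the same in any library: flatten each segmented letter image into a feature vector, pair it with the label taken from its class folder, and fit an SVM. A minimal sketch of that step in Python with scikit-learn (the Accord.NET or Emgu CV calls are analogous), assuming a hypothetical train/ directory with one sub-folder per letter as described in step 3:

import os
import numpy as np
from PIL import Image
from sklearn.svm import SVC

def load_letters(root="train", size=(20, 20)):
    """Walk class folders (train/A, train/B, ...) and build (features, labels)."""
    features, labels = [], []
    for letter in sorted(os.listdir(root)):
        folder = os.path.join(root, letter)
        if not os.path.isdir(folder):
            continue
        for name in os.listdir(folder):
            img = Image.open(os.path.join(folder, name)).convert("L").resize(size)
            features.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
            labels.append(letter)
    return np.stack(features), np.array(labels)

x_train, y_train = load_letters()
clf = SVC(kernel="linear")
clf.fit(x_train, y_train)
# Step 5 then becomes: segment each test captcha the same way and call clf.predict.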
