When you use Fabric for public beta releases (signed with the production config), DON'T USE Google Play App Signing. You will then have to build two signed APKs!
When you distribute to more app stores (Samsung, Amazon, Xiaomi, ...) you will again have to build two signed APKs.
So be really careful with Google Play App Signing.
It's not possible to revert it :/ and Google Play no longer accepts APKs signed with the production key. After enabling Google Play App Signing, only the upload key is accepted...
It really complicates CI distribution...
More issues with upgrading: https://issuetracker.google.com/issues/69285256
It is simple to do.
If you want to use the service as a host and resolve it from another namespace:
If you are using Ambassador or any other API gateway for a service located in another namespace, it is always suggested to:
Use: <service name> (within the same namespace)
Use: <service.name>.<namespace name> (from another namespace)
Not: <service.name>.<namespace name>.svc.cluster.local
The fully qualified name will look like: servicename.namespacename.svc.cluster.local
This will send the request to the particular service inside the namespace you have mentioned.
example:
kind: Service
apiVersion: v1
metadata:
  name: service
spec:
  type: ExternalName
  externalName: <servicename>.<namespace>.svc.cluster.local
Here, replace <servicename> and <namespace> with the appropriate values.
In Kubernetes, namespaces are used to create virtual environments, but all of them are connected with each other.
I think the main difference I can describe relates to record-oriented vs. column-oriented formats. Record-oriented formats are what we're all used to -- text files, delimited formats like CSV, TSV. Avro is slightly cooler than those because it can change schema over time, e.g. adding or removing columns from a record. Other tricks of various formats (especially including compression) involve whether a format can be split -- that is, can you read a block of records from anywhere in the dataset and still know its schema? But here's more detail on columnar formats like Parquet.
Parquet, and other columnar formats handle a common Hadoop situation very efficiently. It is common to have tables (datasets) having many more columns than you would expect in a well-designed relational database -- a hundred or two hundred columns is not unusual. This is so because we often use Hadoop as a place to denormalize data from relational formats -- yes, you get lots of repeated values and many tables all flattened into a single one. But it becomes much easier to query since all the joins are worked out. There are other advantages such as retaining state-in-time data. So anyway it's common to have a boatload of columns in a table.
Let's say there are 132 columns, and some of them are really long text fields, each column laid out one after the other, using up maybe 10K per record.
While querying these tables is easy from an SQL standpoint, it's common that you'll want to get some range of records based on only a few of those hundred-plus columns. For example, you might want all of the records in February and March for customers with sales > $500.
To do this in a row format the query would need to scan every record of the dataset. Read the first row, parse the record into fields (columns) and get the date and sales columns, include it in your result if it satisfies the condition. Repeat. If you have 10 years (120 months) of history, you're reading every single record just to find 2 of those months. Of course this is a great opportunity to use a partition on year and month, but even so, you're reading and parsing 10K of each record/row for those two months just to find whether the customer's sales are > $500.
In a columnar format, each column (field) of a record is stored with others of its kind, spread all over many different blocks on the disk -- columns for year together, columns for month together, columns for customer employee handbook (or other long text), and all the others that make those records so huge all in their own separate place on the disk, and of course columns for sales together. Well heck, date and months are numbers, and so are sales -- they are just a few bytes. Wouldn't it be great if we only had to read a few bytes for each record to determine which records matched our query? Columnar storage to the rescue!
Even without partitions, scanning the small fields needed to satisfy our query is super-fast -- they are all in order by record, and all the same size, so the disk seeks over much less data checking for included records. No need to read through that employee handbook and other long text fields -- just ignore them. So, by grouping columns with each other, instead of rows, you can almost always scan less data. Win!
But wait, it gets better. If your query only needed to know those values and a few more (let's say 10 of the 132 columns) and didn't care about that employee handbook column, once it had picked the right records to return, it would now only have to go back to the 10 columns it needed to render the results, ignoring the other 122 of the 132 in our dataset. Again, we skip a lot of reading.
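As a rough illustration, here is a minimal sketch of column pruning using pandas (assuming pyarrow is installed; the file, column names and values are all made up for the example):
import pandas as pd
# A hypothetical wide table written in a columnar format.
df = pd.DataFrame({
    "year": [2014, 2014, 2014, 2014],
    "month": [2, 2, 3, 7],
    "sales": [450.0, 700.0, 520.0, 80.0],
    "handbook": ["long text..."] * 4,  # the big column we want to skip
})
df.to_parquet("sales.parquet")
# Reading back only the needed columns never touches "handbook" on disk.
slim = pd.read_parquet("sales.parquet", columns=["month", "sales"])
print(slim[slim["month"].isin([2, 3]) & (slim["sales"] > 500)])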
(Note: for this reason, columnar formats are a lousy choice when doing straight transformations. For example, if you're joining all of two tables into one big(ger) result set that you're saving as a new table, the sources are going to get scanned completely anyway, so there's not a lot of benefit in read performance; and because columnar formats need to remember more about where stuff is, they use more memory than a similar row format.)
One more benefit of columnar: data is spread around. To get a single record, you can have 132 workers each read (and write) data from/to 132 different places on 132 blocks of data. Yay for parallelization!
And now for the clincher: compression algorithms work much better when they can find repeating patterns. You could compress AABBBBBBCCCCCCCCCCCCCCCC as 2A6B16C, but ABCABCBCBCBCCCCCCCCCCCCCC wouldn't get as small (well, actually, in this case it would, but trust me :-) ). So once again, less reading. And writing too.
So we read a lot less data to answer common queries, it's potentially faster to read and write in parallel, and compression tends to work much better.
Columnar is great when your input side is large, and your output is a filtered subset: from big to little is great. Not as beneficial when the input and outputs are about the same.
But in our case, Impala took our old Hive queries that ran in 5, 10, 20 or 30 minutes, and finished most in a few seconds or a minute.
Hope this helps answer at least part of your question!
Using union will help in this case.
You can also use a join on a condition that always returns true and is not related to the data in these tables. See below:
select tmd.name, tbc.goals from tblMadrid tmd join tblBarcelona tbc on 1=1;
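For the union approach mentioned above, a sketch (assuming both tables have compatible name and goals columns; adjust to your actual schema):
select name, goals from tblMadrid
union all
select name, goals from tblBarcelona;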
The JSON conversion should work out of the box. For this to happen you need to add some simple configuration:
First, add a contentNegotiationManager to your Spring config file. It is responsible for negotiating the response type:
<bean id="contentNegotiationManager"
      class="org.springframework.web.accept.ContentNegotiationManagerFactoryBean">
    <property name="favorPathExtension" value="false" />
    <property name="favorParameter" value="true" />
    <property name="ignoreAcceptHeader" value="true" />
    <property name="useJaf" value="false" />
    <property name="defaultContentType" value="application/json" />
    <property name="mediaTypes">
        <map>
            <entry key="json" value="application/json" />
            <entry key="xml" value="application/xml" />
        </map>
    </property>
</bean>
<mvc:annotation-driven content-negotiation-manager="contentNegotiationManager" />
<context:annotation-config />
Then add the Jackson2 jars (jackson-databind and jackson-core) to the service's classpath. Jackson is responsible for the data serialization to JSON. Spring will detect these and initialize the MappingJackson2HttpMessageConverter automatically for you. With only this configured, I have automatic conversion to JSON working. The described config has the additional benefit of giving you the possibility to serialize to XML if you set the Accept: application/xml header.
Indices with duplicate values often arise if you create a DataFrame by concatenating other DataFrames. If you don't care about preserving the values of your index and you want them to be unique, then when you concatenate the data, set ignore_index=True.
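A minimal sketch (the column name is made up):
import pandas as pd
df1 = pd.DataFrame({"a": [1, 2]})
df2 = pd.DataFrame({"a": [3, 4]})
# Without ignore_index=True the result's index would be 0, 1, 0, 1.
combined = pd.concat([df1, df2], ignore_index=True)
print(combined.index.tolist())  # [0, 1, 2, 3]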
Alternatively, to overwrite your current index with a new one, instead of using df.reindex(), set:
df.index = new_index
This test shows that your second option can perform significantly better. Findings first, then the code:
one filter with predicate of form u -> exp1 && exp2, list size 10000000, averaged over 100 runs: LongSummaryStatistics{count=100, sum=4142, min=29, average=41.420000, max=82}
two filters with predicates of form u -> exp1, list size 10000000, averaged over 100 runs: LongSummaryStatistics{count=100, sum=13315, min=117, average=133.150000, max=153}
one filter with predicate of form predOne.and(pred2), list size 10000000, averaged over 100 runs: LongSummaryStatistics{count=100, sum=10320, min=82, average=103.200000, max=127}
now the code:
enum Gender {
FEMALE,
MALE
}
static class User {
Gender gender;
int age;
public User(Gender gender, int age){
this.gender = gender;
this.age = age;
}
public Gender getGender() {
return gender;
}
public void setGender(Gender gender) {
this.gender = gender;
}
public int getAge() {
return age;
}
public void setAge(int age) {
this.age = age;
}
}
static long test1(List<User> users){
long time1 = System.currentTimeMillis();
users.stream()
.filter((u) -> u.getGender() == Gender.FEMALE && u.getAge() % 2 == 0)
.allMatch(u -> true); // least overhead terminal function I can think of
long time2 = System.currentTimeMillis();
return time2 - time1;
}
static long test2(List<User> users){
long time1 = System.currentTimeMillis();
users.stream()
.filter(u -> u.getGender() == Gender.FEMALE)
.filter(u -> u.getAge() % 2 == 0)
.allMatch(u -> true); // least overhead terminal function I can think of
long time2 = System.currentTimeMillis();
return time2 - time1;
}
static long test3(List<User> users){
long time1 = System.currentTimeMillis();
users.stream()
.filter(((Predicate<User>) u -> u.getGender() == Gender.FEMALE).and(u -> u.getAge() % 2 == 0))
.allMatch(u -> true); // least overhead terminal function I can think of
long time2 = System.currentTimeMillis();
return time2 - time1;
}
public static void main(String... args) {
int size = 10000000;
List<User> users =
IntStream.range(0,size)
.mapToObj(i -> i % 2 == 0 ? new User(Gender.MALE, i % 100) : new User(Gender.FEMALE, i % 100))
.collect(Collectors.toCollection(()->new ArrayList<>(size)));
repeat("one filter with predicate of form u -> exp1 && exp2", users, Temp::test1, 100);
repeat("two filters with predicates of form u -> exp1", users, Temp::test2, 100);
repeat("one filter with predicate of form predOne.and(pred2)", users, Temp::test3, 100);
}
private static void repeat(String name, List<User> users, ToLongFunction<List<User>> test, int iterations) {
System.out.println(name + ", list size " + users.size() + ", averaged over " + iterations + " runs: " + IntStream.range(0, iterations)
.mapToLong(i -> test.applyAsLong(users))
.summaryStatistics());
}
For people using Tiny Core Linux, you also need to install libtool-dev, as it has the *.m4 files needed for libtoolize.
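For example, assuming the standard Tiny Core package manager is available:
tce-load -wi libtool-dev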
I will try to demonstrate Encapsulation and Abstraction in a simple way. Let's see.
Encapsulation implements Abstraction.
Let's see an example:
The below image shows a GUI of "Customer Details to be ADDed into a Database".
By looking at the Image we can say that we need a Customer Class.
Step - 1: What does my Customer class need?
i.e.
2 variables to store Customer Code and Customer Name.
1 Function to Add the Customer Code and Customer Name into Database.
namespace CustomerContent
{
public class Customer
{
public string CustomerCode = "";
public string CustomerName = "";
public void ADD()
{
//my DB code will go here
}
Now the ADD method alone won't work here.
Step - 2: How will the validation and the ADD function work?
We will need database connection code and validation code (extra methods).
public bool Validate()
{
//Granular Customer Code and Name
return true;
}
public bool CreateDBObject()
{
//DB Connection Code
return true;
}
class Program
{
static void Main(string[] args)
{
CustomerContent.Customer obj = new CustomerContent.Customer();
obj.CustomerCode = "s001";
obj.CustomerName = "Mac";
obj.Validate();
obj.CreateDBObject();
obj.ADD();
}
}
Now there is no need to show the extra methods (Validate(); CreateDBObject() [complicated and extra methods]) to the end user. The end user only needs to see and know about Customer Code, Customer Name and the ADD button, which will ADD the record. The end user doesn't care about HOW the data is added to the database.
Step - 3: Make the extra and complicated methods, which don't involve the end user's interaction, private.
By making those complicated and extra methods private instead of public (i.e. hiding those methods), and deleting obj.Validate(); and obj.CreateDBObject(); from Main in class Program, we achieve encapsulation.
In other words, simplifying the interface to the end user is encapsulation.
So now the complete code looks like as below -
namespace CustomerContent
{
public class Customer
{
public string CustomerCode = "";
public string CustomerName = "";
public void ADD()
{
//my DB code will go here
}
private bool Validate()
{
//Granular Customer Code and Name
return true;
}
private bool CreateDBObject()
{
//DB Connection Code
return true;
}
class Program
{
static void Main(string[] args)
{
CustomerContent.Customer obj = new CustomerContent.Customer();
obj.CustomerCode = "s001";
obj.CustomerName = "Mac";
obj.ADD();
}
}
Summary:
Step - 1: What does my Customer class need? is Abstraction.
Step - 3: Making the extra and complicated methods, which don't involve the end user's interaction, private is Encapsulation.
P.S. - The code above is a quick and rough sketch.
UPDATE: There is a video at this link explaining the sample: What is the difference between Abstraction and Encapsulation
If you're using Maven and the latest Jackson code then you can remove all the Jackson-specific configuration from your spring configuration XML files (you'll still need an annotation-driven tag <mvc:annotation-driven/>) and simply add some Jackson dependencies to your pom.xml file. See below for an example of the dependencies. This worked for me and I'm using:
spring-security version 3.1.0.RELEASE.
...<dependencies>
...
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-core</artifactId>
<version>2.2.3</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.2.3</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-annotations</artifactId>
<version>2.2.3</version>
</dependency>
...
</dependencies>...
Data abstraction: exposing only the essential data members and member functions of a class, while hiding the inner details, is called data abstraction.
Encapsulation: binding variables and functions, or one can say data members and member functions, all together in a single unit is called data encapsulation.
One more variation
Many of the operations done in pandas can also be done with a DB query (SQL, MongoDB).
Using an RDBMS or MongoDB allows you to perform some of the aggregations in the DB query (which is optimized for large data, and uses caches and indexes efficiently).
Later, you can perform post processing using pandas.
The advantage of this method is that you gain the DB optimizations for working with large data, while still defining the logic in a high level declarative syntax - and not having to deal with the details of deciding what to do in memory and what to do out of core.
And although the query language and pandas are different, it's usually not complicated to translate part of the logic from one to another.
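A minimal sketch of the pattern (the database, table and column names are made up; assuming SQLite via the standard library):
import sqlite3
import pandas as pd
conn = sqlite3.connect("example.db")
# Let the database do the heavy aggregation...
df = pd.read_sql("SELECT customer, SUM(amount) AS total FROM sales GROUP BY customer", conn)
# ...then post-process the much smaller result in pandas.
top = df.sort_values("total", ascending=False).head(10)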
I think you can reset the invocations using Mockito.reset(mockLog). You should call this before every test, so inside @Before would be a good place.
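A minimal sketch, assuming mockLog is the mock in question:
@Before
public void setUp() {
    Mockito.reset(mockLog); // clears recorded invocations before each test
}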
This sentence is a UDP joke, but I'm not sure that you'll get it. The below conversation is a TCP/IP joke:
A: Do you want to hear a TCP/IP joke?
B: Yes, I want to hear a TCP/IP joke.
A: Ok, are you ready to hear a TCP/IP joke?
B: Yes, I'm ready to hear a TCP/IP joke.
A: Well, here is the TCP/IP joke.
A: Did you receive a TCP/IP joke?
B: Yes, I **did** receive a TCP/IP joke.
In Java, the final keyword is used in the following situations.
In Java, final variables can't be reassigned, final classes can't be extended and final methods can't be overridden.
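A minimal sketch of all three uses:
final int limit = 10;      // final variable: can't be reassigned
final class Utils { }      // final class: can't be extended
class Base {
    final void run() { }   // final method: can't be overridden in subclasses
}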
On newer versions of yum, this information is stored in the "yumdb" when the package is installed. This is the only 100% accurate way to get the information, and you can use:
yumdb search from_repo repoid
(or repoquery and grep -- don't grep yum output). However the command "find-repos-of-install" was part of yum-utils for a while which did the best guess without that information:
http://james.fedorapeople.org/yum/commands/find-repos-of-install.py
As floyd said, a lot of repos include a unique "dist" tag in their release, and you can look for that... however from what you said, I guess that isn't the case for you?
The problem with solutions based on sums of numbers is they don't take into account the cost of storing and working with numbers with large exponents... in practice, for it to work for very large n, a big numbers library would be used. We can analyse the space utilisation for these algorithms.
We can analyse the time and space complexity of sdcvvc and Dimitris Andreou's algorithms.
Storage:
l_j = ceil(log_2(sum_{i=1}^n i^j))
l_j > log_2(n^j) (assuming n >= 0, k >= 0)
l_j > j log_2(n) \in \Omega(j log n)
l_j < log_2((sum_{i=1}^n i)^j) + 1
l_j < j log_2(n) + j log_2(n + 1) - j log_2(2) + 1
l_j < j log_2(n) + j + c \in O(j log n)
So l_j \in \Theta(j log n)
Total storage used: \sum_{j=1}^k l_j \in \Theta(k^2 log n)
Time used: assuming that computing a^j takes ceil(log_2 j) time, the total time is:
t = k ceil(\sum_i=1^n log_2 (i)) = k ceil(log_2 (\prod_i=1^n (i)))
t > k log_2 (n^n + O(n^(n-1)))
t > k log_2 (n^n) = kn log_2 (n) \in \Omega(kn log n)
t < k log_2 (\prod_i=1^n i^i) + 1
t < kn log_2 (n) + 1 \in O(kn log n)
Total time used: \Theta(kn log n)
If this time and space is satisfactory, you can use a simple recursive algorithm. Let b!!i be the ith entry in the bag, n the number of numbers before removals, and k the number of removals. In Haskell syntax...
let
  -- O(1)
  isInRange low high v = (v >= low) && (v <= high)
  -- O(n - k)
  countInRange low high = sum $ map (fromEnum . isInRange low high . (b !!)) [1..(n-k)]
  findMissing l low high krange
    -- O(1) if there is nothing to find.
    | krange == 0 = l
    -- O(1) if there is only one possibility.
    | low == high = low:l
    -- Otherwise total of O(kn log(n)) time
    | otherwise =
        let
          mid = (low + high) `div` 2
          klow = countInRange low mid
          khigh = krange - klow
        in
          findMissing (findMissing l low mid klow) (mid + 1) high khigh
in
  findMissing [] 1 (n - k) k
Storage used: O(k) for the list, O(log(n)) for the stack: O(k + log(n)) in total.
This algorithm is more intuitive, has the same time complexity, and uses less space.
txtText.Font = new Font("Segoe UI", 8, FontStyle.Bold);
// Font(Font Name, Font Size, Font Style)
Different languages do different things with different levels of efficiency.
The Benchmarks Game has a whole load of different programming problems implemented in a lot of different languages.
I think the AtomPub REST API is a great example of a RESTful service. See the snippet below from the AtomPub spec:
POST /edit/ HTTP/1.1
Host: example.org
User-Agent: Thingio/1.0
Authorization: Basic ZGFmZnk6c2VjZXJldA==
Content-Type: application/atom+xml;type=entry
Content-Length: nnn
Slug: First Post
<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom">
<title>Atom-Powered Robots Run Amok</title>
<id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
<updated>2003-12-13T18:30:02Z</updated>
<author><name>John Doe</name></author>
<content>Some text.</content>
</entry>
The server signals a successful creation with a status code of 201. The response includes a Location header indicating the Member Entry URI of the Atom Entry, and a representation of that Entry in the body of the response.
HTTP/1.1 201 Created
Date: Fri, 7 Oct 2005 17:17:11 GMT
Content-Length: nnn
Content-Type: application/atom+xml;type=entry;charset="utf-8"
Location: http://example.org/edit/first-post.atom
ETag: "c180de84f991g8"
<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom">
<title>Atom-Powered Robots Run Amok</title>
<id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
<updated>2003-12-13T18:30:02Z</updated>
<author><name>John Doe</name></author>
<content>Some text.</content>
<link rel="edit"
href="http://example.org/edit/first-post.atom"/>
</entry>
The Entry created and returned by the Collection might not match the Entry POSTed by the client. A server MAY change the values of various elements in the Entry, such as the atom:id, atom:updated, and atom:author values, and MAY choose to remove or add other elements and attributes, or change element content and attribute values.
We are using DBs of 50 GB+ on our platform. No complaints; it works great. Make sure you are doing everything right! Are you using prepared statements? *SQLite 3.7.3
Apply these settings (right after you create the DB):
PRAGMA main.page_size = 4096;
PRAGMA main.cache_size=10000;
PRAGMA main.locking_mode=EXCLUSIVE;
PRAGMA main.synchronous=NORMAL;
PRAGMA main.journal_mode=WAL;
PRAGMA main.cache_size=5000;
Hope this will help others; it works great here.
Also, consider the way strings are implemented (differently on each platform) and what happens when you start stitching them together, like when using a StringBuilder. It allocates a buffer for you to copy into; once you reach the end, it allocates even more memory for you, in the hopes that performance won't be hindered if you do a large concatenation.
Maybe Jon Skeet can help us out here?
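For illustration, a minimal .NET sketch (the capacity value is arbitrary):
var sb = new System.Text.StringBuilder(16); // starts with a small buffer
sb.Append("Hello, ").Append("world");       // grows the buffer as needed
string result = sb.ToString();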
In addition to DESCRIBE TABLE, you can use the command below
DESCRIBE INDEXES FOR TABLE *tablename* SHOW DETAIL
to get information about the table's indexes.
The most comprehensive detail about a table on Db2 for Linux, UNIX, and Windows can be obtained from the db2look utility, which you can run from a remote client or directly on the Db2 server as a local user. The tool produces the DDL and other information necessary to mimic tables and their statistical data. The docs for db2look in Db2 11.5 are here.
The following db2look
command will connect to the SALESDB database and obtain the DDL statements necessary to recreate the ORDERS table
db2look -d SALESDB -e -t ORDERS
In Python 3.2 and later, use
>>> int.from_bytes(b'y\xcc\xa6\xbb', byteorder='big')
2043455163
or
>>> int.from_bytes(b'y\xcc\xa6\xbb', byteorder='little')
3148270713
according to the endianness of your byte-string.
This also works for bytestring-integers of arbitrary length, and for two's-complement signed integers by specifying signed=True
. See the docs for from_bytes
.
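For example, a two-byte two's-complement value:
>>> int.from_bytes(b'\xfc\x00', byteorder='big', signed=True)
-1024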
You could also put this on one line like so:
/path/to/jre/bin/keytool -genkey -alias [mypassword] -keyalg [RSA]
Wanted to include this as a comment on piet.t's answer, but I don't have enough rep to comment.
See the "signing" section of this article, which describes how to access keytool.exe without changing your working directory to its path: https://flutter.dev/docs/deployment/android#signing-the-app
Note that they say you can type in space-separated folder names like /"Program Files"/ with quotes, but I found that in bash I had to escape with backslashes, like /Program\ Files/.
This answer applies to this question if the path below can be relative:
PYTHONPATH=/path/to/dir python script.py
You should be able to utilize the StringUtils class and the countMatches() method.
public static int countMatches(String str, String sub)
Counts how many times the substring appears in the larger String.
Try the following:
int count = StringUtils.countMatches("a.b.c.d", ".");
It's the directory where java was run from, where you started the JVM. It does not have to be within the user's home directory; it can be anywhere the user has permission to run java.
So if you cd into /somedir, then run your program, user.dir will be /somedir.
A different property, user.home, refers to the user's directory. As in /Users/myuser or /home/myuser or C:\Users\myuser.
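A quick way to check both (the printed paths are just examples):
System.out.println(System.getProperty("user.dir"));  // e.g. /somedir
System.out.println(System.getProperty("user.home")); // e.g. /home/myuser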
See here for a list of system properties and their descriptions.
jQuery UI Datepicker is coded to always highlight the user's local date using the class ui-state-highlight. There is no built-in option to change this.
One method, described similarly in other answers to related questions, is to override the CSS for that class to match the ui-state-default of your theme, for example:
.ui-state-highlight {
border: 1px solid #d3d3d3;
background: #e6e6e6 url(images/ui-bg_glass_75_e6e6e6_1x400.png) 50% 50% repeat-x;
color: #555555;
}
However this isn't very helpful if you are using dynamic themes, or if your intent is to highlight a different day (e.g., to have "today" be based on your server's clock rather than the client's).
An alternative approach is to override the datepicker prototype that is responsible for highlighting the current day.
Assuming that you are using a minimized version of the UI javascript, the following snippets can address these concerns.
If your goal is to prevent highlighting the current day altogether:
// copy existing _generateHTML method
var _generateHTML = jQuery.datepicker.constructor.prototype._generateHTML;
// remove the string "ui-state-highlight"
_generateHTML = _generateHTML.toString().replace(' ui-state-highlight', '');
// and replace the prototype method
eval('jQuery.datepicker.constructor.prototype._generateHTML = ' + _generateHTML);
This changes the relevant code (unminimized for readability) from:
[...](printDate.getTime() == today.getTime() ? ' ui-state-highlight' : '') + [...]
to
[...](printDate.getTime() == today.getTime() ? '' : '') + [...]
If your goal is to change datepicker's definition of "today":
var useMyDateNotYours = '07/28/2014';
// copy existing _generateHTML method
var _generateHTML = jQuery.datepicker.constructor.prototype._generateHTML;
// set "today" to your own Date()-compatible date
_generateHTML = _generateHTML.toString().replace('new Date,', 'new Date(useMyDateNotYours),');
// and replace the prototype method
eval('jQuery.datepicker.constructor.prototype._generateHTML = ' + _generateHTML);
This changes the relevant code (unminimized for readability) from:
[...]var today = new Date();[...]
to
[...]var today = new Date(useMyDateNotYours);[...]
// Note that in the minimized version, the line above take the form `L=new Date,`
// (part of a list of variable declarations, and Date is instantiated without parenthesis)
Instead of useMyDateNotYours you could of course also inject a string, function, or whatever suits your needs.
<input type="file"
id="fileUploadButton"
style={{ display: 'none' }}
onChange={onFileChange}
/>
<label htmlFor={'fileUploadButton'}>
<Button
color="secondary"
className={classes.btnUpload}
variant="contained"
component="span"
startIcon={
<SvgIcon fontSize="small">
<UploadIcon />
</SvgIcon>
}
>
Upload
</Button>
</label>
Make sure Button has component="span", that helped me.
To start recording your device’s screen, run the following command:
adb shell screenrecord /sdcard/example.mp4
This command will start recording your device's screen using the default settings and save the resulting video to /sdcard/example.mp4 on your device.
When you’re done recording, press Ctrl+C in the Command Prompt window to stop the screen recording. You can then find the screen recording file at the location you specified. Note that the screen recording is saved to your device’s internal storage, not to your computer.
The default settings are to use your device’s standard screen resolution, encode the video at a bitrate of 4Mbps, and set the maximum screen recording time to 180 seconds. For more information about the command-line options you can use, run the following command:
adb shell screenrecord --help
This works without rooting the device. Hope this helps.
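For instance, to cap the recording at 30 seconds and raise the bitrate (both flags are listed by --help):
adb shell screenrecord --time-limit 30 --bit-rate 8000000 /sdcard/example.mp4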
As mentioned in Security Considerations section of Marshal documentation,
If you need to deserialize untrusted data, use JSON or another serialization format that is only able to load simple, ‘primitive’ types such as String, Array, Hash, etc.
Here is an example on how to do cloning using JSON in Ruby:
require "json"
original = {"John"=>"Adams","Thomas"=>"Jefferson","Johny"=>"Appleseed"}
cloned = JSON.parse(JSON.generate(original))
# Modify original hash
original["John"] << ' Sandler'
p original
#=> {"John"=>"Adams Sandler", "Thomas"=>"Jefferson", "Johny"=>"Appleseed"}
# cloned remains intact as it was deep copied
p cloned
#=> {"John"=>"Adams", "Thomas"=>"Jefferson", "Johny"=>"Appleseed"}
I had to benchmark the different ways. I shall comment on my findings but, in short, the fastest way is to use a plain old BufferedInputStream over a FileInputStream. If many files must be read then three threads will reduce the total execution time to roughly half, but adding more threads will progressively degrade performance until making it take three times longer to complete with twenty threads than with just one thread.
The assumption is that you must read a file and do something meaningful with its contents. In the examples here, that is reading lines from a log and counting the ones which contain values that exceed a certain threshold. So I am assuming that the one-liner Java 8 Files.lines(Paths.get("/path/to/file.txt")).map(line -> line.split(";")) is not an option.
I tested on Java 1.8, Windows 7 and both SSD and HDD drives.
I wrote six different implementations:
rawParse: Use BufferedInputStream over a FileInputStream and then cut lines reading byte by byte. This outperformed any other single-thread approach, but it may be very inconvenient for non-ASCII files.
lineReaderParse: Use a BufferedReader over a FileReader, read line by line, split lines by calling String.split(). This is approximately 20% slower than rawParse.
lineReaderParseParallel: This is the same as lineReaderParse, but it uses several threads. This is the fastest option overall in all cases.
nioFilesParse: Use java.nio.file.Files.lines()
nioAsyncParse: Use an AsynchronousFileChannel with a completion handler and a thread pool.
nioMemoryMappedParse: Use a memory-mapped file. This is really a bad idea yielding execution times at least three times longer than any other implementation.
These are the average times for reading 204 files of 4 MB each on a quad-core i7 and an SSD drive. The files are generated on the fly to avoid disk caching.
rawParse 11.10 sec
lineReaderParse 13.86 sec
lineReaderParseParallel 6.00 sec
nioFilesParse 13.52 sec
nioAsyncParse 16.06 sec
nioMemoryMappedParse 37.68 sec
I found a smaller difference than I expected between running on an SSD and an HDD drive, with the SSD approximately 15% faster. This may be because the files are generated on an unfragmented HDD and they are read sequentially, so the spinning drive can perform nearly as well as an SSD.
I was surprised by the low performance of the nioAsyncParse implementation. Either I have implemented something in the wrong way, or the multi-thread implementation using NIO and a completion handler performs the same as (or even worse than) a single-thread implementation with the java.io API. Moreover, the asynchronous parse with a CompletionHandler is much longer in lines of code and trickier to implement correctly than a straight implementation on old streams.
Now the six implementations, followed by a class containing them all plus a parametrizable main() method that allows playing with the number of files, file size and concurrency degree. Note that the size of the files varies by plus or minus 20%. This is to avoid any effect due to all the files being exactly the same size.
rawParse
public void rawParse(final String targetDir, final int numberOfFiles) throws IOException, ParseException {
overrunCount = 0;
final int dl = (int) ';';
StringBuffer lineBuffer = new StringBuffer(1024);
for (int f=0; f<numberOfFiles; f++) {
File fl = new File(targetDir+filenamePreffix+String.valueOf(f)+".txt");
FileInputStream fin = new FileInputStream(fl);
BufferedInputStream bin = new BufferedInputStream(fin);
int character;
while((character=bin.read())!=-1) {
if (character==dl) {
// Here is where something is done with each line
doSomethingWithRawLine(lineBuffer.toString());
lineBuffer.setLength(0);
}
else {
lineBuffer.append((char) character);
}
}
bin.close();
fin.close();
}
}
public final void doSomethingWithRawLine(String line) throws ParseException {
// What to do for each line
int fieldNumber = 0;
final int len = line.length();
StringBuffer fieldBuffer = new StringBuffer(256);
for (int charPos=0; charPos<len; charPos++) {
char c = line.charAt(charPos);
if (c==DL0) {
String fieldValue = fieldBuffer.toString();
if (fieldValue.length()>0) {
switch (fieldNumber) {
case 0:
Date dt = fmt.parse(fieldValue);
fieldNumber++;
break;
case 1:
double d = Double.parseDouble(fieldValue);
fieldNumber++;
break;
case 2:
int t = Integer.parseInt(fieldValue);
fieldNumber++;
break;
case 3:
if (fieldValue.equals("overrun"))
overrunCount++;
break;
}
}
fieldBuffer.setLength(0);
}
else {
fieldBuffer.append(c);
}
}
}
lineReaderParse
public void lineReaderParse(final String targetDir, final int numberOfFiles) throws IOException, ParseException {
String line;
for (int f=0; f<numberOfFiles; f++) {
File fl = new File(targetDir+filenamePreffix+String.valueOf(f)+".txt");
FileReader frd = new FileReader(fl);
BufferedReader brd = new BufferedReader(frd);
while ((line=brd.readLine())!=null)
doSomethingWithLine(line);
brd.close();
frd.close();
}
}
public final void doSomethingWithLine(String line) throws ParseException {
// Example of what to do for each line
String[] fields = line.split(";");
Date dt = fmt.parse(fields[0]);
double d = Double.parseDouble(fields[1]);
int t = Integer.parseInt(fields[2]);
if (fields[3].equals("overrun"))
overrunCount++;
}
lineReaderParseParallel
public void lineReaderParseParallel(final String targetDir, final int numberOfFiles, final int degreeOfParalelism) throws IOException, ParseException, InterruptedException {
Thread[] pool = new Thread[degreeOfParalelism];
int batchSize = numberOfFiles / degreeOfParalelism;
for (int b=0; b<degreeOfParalelism; b++) {
pool[b] = new LineReaderParseThread(targetDir, b*batchSize, b*batchSize+batchSize);
pool[b].start();
}
for (int b=0; b<degreeOfParalelism; b++)
pool[b].join();
}
class LineReaderParseThread extends Thread {
private String targetDir;
private int fileFrom;
private int fileTo;
private DateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
private int overrunCounter = 0;
public LineReaderParseThread(String targetDir, int fileFrom, int fileTo) {
this.targetDir = targetDir;
this.fileFrom = fileFrom;
this.fileTo = fileTo;
}
private void doSomethingWithTheLine(String line) throws ParseException {
String[] fields = line.split(DL);
Date dt = fmt.parse(fields[0]);
double d = Double.parseDouble(fields[1]);
int t = Integer.parseInt(fields[2]);
if (fields[3].equals("overrun"))
overrunCounter++;
}
@Override
public void run() {
String line;
for (int f=fileFrom; f<fileTo; f++) {
File fl = new File(targetDir+filenamePreffix+String.valueOf(f)+".txt");
try {
FileReader frd = new FileReader(fl);
BufferedReader brd = new BufferedReader(frd);
while ((line=brd.readLine())!=null) {
doSomethingWithTheLine(line);
}
brd.close();
frd.close();
} catch (IOException | ParseException ioe) { }
}
}
}
nioFilesParse
public void nioFilesParse(final String targetDir, final int numberOfFiles) throws IOException, ParseException {
for (int f=0; f<numberOfFiles; f++) {
Path ph = Paths.get(targetDir+filenamePreffix+String.valueOf(f)+".txt");
Consumer<String> action = new LineConsumer();
Stream<String> lines = Files.lines(ph);
lines.forEach(action);
lines.close();
}
}
class LineConsumer implements Consumer<String> {
@Override
public void accept(String line) {
// What to do for each line
String[] fields = line.split(DL);
if (fields.length>1) {
try {
Date dt = fmt.parse(fields[0]);
}
catch (ParseException e) {
}
double d = Double.parseDouble(fields[1]);
int t = Integer.parseInt(fields[2]);
if (fields[3].equals("overrun"))
overrunCount++;
}
}
}
nioAsyncParse
public void nioAsyncParse(final String targetDir, final int numberOfFiles, final int numberOfThreads, final int bufferSize) throws IOException, ParseException, InterruptedException {
ScheduledThreadPoolExecutor pool = new ScheduledThreadPoolExecutor(numberOfThreads);
ConcurrentLinkedQueue<ByteBuffer> byteBuffers = new ConcurrentLinkedQueue<ByteBuffer>();
for (int b=0; b<numberOfThreads; b++)
byteBuffers.add(ByteBuffer.allocate(bufferSize));
for (int f=0; f<numberOfFiles; f++) {
consumerThreads.acquire();
String fileName = targetDir+filenamePreffix+String.valueOf(f)+".txt";
AsynchronousFileChannel channel = AsynchronousFileChannel.open(Paths.get(fileName), EnumSet.of(StandardOpenOption.READ), pool);
BufferConsumer consumer = new BufferConsumer(byteBuffers, fileName, bufferSize);
channel.read(consumer.buffer(), 0l, channel, consumer);
}
consumerThreads.acquire(numberOfThreads);
}
class BufferConsumer implements CompletionHandler<Integer, AsynchronousFileChannel> {
private ConcurrentLinkedQueue<ByteBuffer> buffers;
private ByteBuffer bytes;
private String file;
private StringBuffer chars;
private int limit;
private long position;
private DateFormat frmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
public BufferConsumer(ConcurrentLinkedQueue<ByteBuffer> byteBuffers, String fileName, int bufferSize) {
buffers = byteBuffers;
bytes = buffers.poll();
if (bytes==null)
bytes = ByteBuffer.allocate(bufferSize);
file = fileName;
chars = new StringBuffer(bufferSize);
frmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
limit = bufferSize;
position = 0l;
}
public ByteBuffer buffer() {
return bytes;
}
@Override
public synchronized void completed(Integer result, AsynchronousFileChannel channel) {
if (result!=-1) {
bytes.flip();
final int len = bytes.limit();
int i = 0;
try {
for (i = 0; i < len; i++) {
byte by = bytes.get();
if (by=='\n') {
// ***
// The code used to process the line goes here
chars.setLength(0);
}
else {
chars.append((char) by);
}
}
}
catch (Exception x) {
System.out.println(
"Caught exception " + x.getClass().getName() + " " + x.getMessage() +
" i=" + String.valueOf(i) + ", limit=" + String.valueOf(len) +
", position="+String.valueOf(position));
}
if (len==limit) {
bytes.clear();
position += len;
channel.read(bytes, position, channel, this);
}
else {
try {
channel.close();
}
catch (IOException e) {
}
consumerThreads.release();
bytes.clear();
buffers.add(bytes);
}
}
else {
try {
channel.close();
}
catch (IOException e) {
}
consumerThreads.release();
bytes.clear();
buffers.add(bytes);
}
}
@Override
public void failed(Throwable e, AsynchronousFileChannel channel) {
}
};
FULL RUNNABLE IMPLEMENTATION OF ALL CASES
https://github.com/sergiomt/javaiobenchmark/blob/master/FileReadBenchmark.java
I wasn't able to fix the problem in OpenCV either, but a video4linux (V4L2) workaround does work with OpenCV when using Linux. At least, it does on my Raspberry Pi with Raspbian and my cheap webcam. This is not as solid, light and portable as you'd like it to be, but for some situations it might be very useful nevertheless.
Make sure you have the v4l2-ctl application installed, e.g. from the Debian v4l-utils package. Then run (before running the Python application, or from within it) the command:
v4l2-ctl -d /dev/video1 -c exposure_auto=1 -c exposure_auto_priority=0 -c exposure_absolute=10
It overrides your camera's shutter time with manual settings and changes the shutter time (in ms?) via the last parameter, in this example 10. The lower this value, the darker the image.
Always keep things easy and simple by creating a class:
.bcolor{ background:#F00; }
Then use addClass() and removeClass() to finish it up.
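A minimal sketch (the element id is made up):
$('#myDiv').addClass('bcolor');    // apply the highlight
$('#myDiv').removeClass('bcolor'); // remove it again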
The rowSums function (as Greg mentions) will do what you want, but you are mixing subsetting techniques in your answer; do not use "$" when using "[]". Your code should look something more like:
data$new <- rowSums( data[,43:167] )
If you want to use a function other than sum, then look at ?apply for applying general functions across rows or columns, as shown below.
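For example, a row-wise maximum over the same columns (the second argument, 1, means "apply over rows"):
data$rowmax <- apply(data[,43:167], 1, max)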
You can also try this pure CSS method:
font-size: calc(100% - 0.3em);
Well, technically any such object will wind up being built over a C-style thread library, because C++ only just specified a stock std::thread model in C++0x, which was just nailed down and hasn't yet been implemented. The problem is somewhat systemic; technically the existing C++ memory model isn't strict enough to allow well-defined semantics for all of the 'happens before' cases. Hans Boehm wrote a paper on the topic a while back and was instrumental in hammering out the C++0x standard on the topic.
http://www.hpl.hp.com/techreports/2004/HPL-2004-209.html
That said, there are several cross-platform C++ thread libraries that work just fine in practice. Intel Threading Building Blocks contains a tbb::thread object that closely approximates the C++0x standard, and Boost has a boost::thread library that does the same.
http://www.threadingbuildingblocks.org/
http://www.boost.org/doc/libs/1_37_0/doc/html/thread.html
Using boost::thread you'd get something like:
#include <boost/thread.hpp>
void task1() {
// do stuff
}
void task2() {
// do stuff
}
int main (int argc, char ** argv) {
using namespace boost;
thread thread_1 = thread(task1);
thread thread_2 = thread(task2);
// do other stuff
thread_2.join();
thread_1.join();
return 0;
}
Bootstrap 4.x answer
The hidden-* classes are removed from Bootstrap 4 beta onward.
If you want to show on medium and up, use the d-* classes, e.g.:
<div class="d-none d-md-block">This will show in medium and up</div>
If you want to show only in small and below use this:
<div class="d-block d-md-none"> This will show only in below medium form factors</div>
Screen size and class chart
| Screen Size | Class |
|--------------------|--------------------------------|
| Hidden on all | .d-none |
| Hidden only on xs | .d-none .d-sm-block |
| Hidden only on sm | .d-sm-none .d-md-block |
| Hidden only on md | .d-md-none .d-lg-block |
| Hidden only on lg | .d-lg-none .d-xl-block |
| Hidden only on xl | .d-xl-none |
| Visible on all | .d-block |
| Visible only on xs | .d-block .d-sm-none |
| Visible only on sm | .d-none .d-sm-block .d-md-none |
| Visible only on md | .d-none .d-md-block .d-lg-none |
| Visible only on lg | .d-none .d-lg-block .d-xl-none |
| Visible only on xl | .d-none .d-xl-block |
Rather than using explicit .visible-* classes, you make an element visible by simply not hiding it at that screen size. You can combine one .d-*-none class with one .d-*-block class to show an element only on a given interval of screen sizes (e.g. .d-none.d-md-block.d-xl-none shows the element only on medium and large devices).
You need to set it in the value itself, not in the prepared statement SQL string.
So, this should do for a prefix-match:
notes = notes
.replace("!", "!!")
.replace("%", "!%")
.replace("_", "!_")
.replace("[", "![");
PreparedStatement pstmt = con.prepareStatement(
"SELECT * FROM analysis WHERE notes LIKE ? ESCAPE '!'");
pstmt.setString(1, notes + "%");
or a suffix-match:
pstmt.setString(1, "%" + notes);
or a global match:
pstmt.setString(1, "%" + notes + "%");
You could also just use the old-fashioned strtok:
http://www.cplusplus.com/reference/clibrary/cstring/strtok/
It's a bit wonky but doesn't involve using Boost (not that Boost is a bad thing).
You basically call strtok with the string you want to split and the delimiter (in this case a space), and it will return you a char*.
From the link:
#include <stdio.h>
#include <string.h>
int main ()
{
char str[] ="- This, a sample string.";
char * pch;
printf ("Splitting string \"%s\" into tokens:\n",str);
pch = strtok (str," ,.-");
while (pch != NULL)
{
printf ("%s\n",pch);
pch = strtok (NULL, " ,.-");
}
return 0;
}
To get the index values as a list/list of tuples for an Index/MultiIndex, do:
df.index.values.tolist() # an ndarray method, you probably shouldn't depend on this
or
list(df.index.values) # this will always work in pandas
The most important part is that you must override both intrinsicContentSize and drawText(in:) in order to account for Auto Layout:
var contentInset: UIEdgeInsets = .zero {
didSet {
setNeedsDisplay()
}
}
override public var intrinsicContentSize: CGSize {
let size = super.intrinsicContentSize
return CGSize(width: size.width + contentInset.left + contentInset.right, height: size.height + contentInset.top + contentInset.bottom)
}
override public func drawText(in rect: CGRect) {
super.drawText(in: UIEdgeInsetsInsetRect(rect, contentInset))
}
function toFixed(num, fixed) {
var re = new RegExp('^-?\\d+(?:\\.\\d{0,' + (fixed || -1) + '})?');
return num.toString().match(re)[0];
}
As floating point math in JavaScript will always have edge cases, the previous solution will be accurate most of the time, which is not good enough. There are some solutions to this like num.toPrecision, BigDecimal.js, and accounting.js.
Yet, I believe that merely parsing the string will be the simplest and always accurate.
Basing the update on the well written regex from the accepted answer by @Gumbo, this new toFixed function will always work as expected.
Roll your own toFixed function:
function toFixed(num, fixed) {
fixed = fixed || 0;
fixed = Math.pow(10, fixed);
return Math.floor(num * fixed) / fixed;
}
Happy coding!
If you get an error on Internet Explorer, change the security settings to allow ActiveX:
var CallBackFunction = function(content) {
alert(content);
}
ReadFileAllBrowsers(document.getElementById("file_upload"), CallBackFunction);
//Tested in Mozilla Firefox browser, Chrome
function ReadFileAllBrowsers(FileElement, CallBackFunction) {
try {
var file = FileElement.files[0];
var contents_ = "";
if (file) {
var reader = new FileReader();
reader.readAsText(file, "UTF-8");
reader.onload = function(evt) {
CallBackFunction(evt.target.result);
}
reader.onerror = function(evt) {
alert("Error reading file");
}
}
} catch (Exception) {
var fall_back = ieReadFile(FileElement.value);
if (fall_back != false) {
CallBackFunction(fall_back);
}
}
}
///Reading files with Internet Explorer
function ieReadFile(filename) {
try {
var fso = new ActiveXObject("Scripting.FileSystemObject");
var fh = fso.OpenTextFile(filename, 1);
var contents = fh.ReadAll();
fh.Close();
return contents;
} catch (Exception) {
alert(Exception);
return false;
}
}
Your code is blanking out your file:
import csv
workingdir = r"C:\Mer\Ven\sample"
csvfile = workingdir + r"\test3.csv"
f=open(csvfile,'wb') # opens file for writing (erases contents)
csv.writer(f, delimiter =' ',quotechar =',',quoting=csv.QUOTE_MINIMAL)
if you want to read the file in, you will need to use csv.reader and open the file for reading.
import csv
workingdir = r"C:\Mer\Ven\sample"
csvfile = workingdir + r"\test3.csv"
f=open(csvfile,'rb') # opens file for reading
reader = csv.reader(f)
for line in reader:
print line
If you want to write that back out to a new file with different delimiters, you can create a new file and specify those delimiters and write out each line (instead of printing the tuple).
Here is an extension method; it allows coding like the SQL IN command.
public static bool In<T>(this T o, params T[] values)
{
if (values == null) return false;
return values.Contains(o);
}
public static bool In<T>(this T o, IEnumerable<T> values)
{
if (values == null) return false;
return values.Contains(o);
}
This allows stuff like that:
List<int> ints = new List<int>( new[] {1,5,7});
int i = 5;
bool isIn = i.In(ints);
Or:
int i = 5;
bool isIn = i.In(1,2,3,4,5);
Another way:
Interface:
-(NSString *) stringByStrippingHTML:(NSString*)inputString;
Implementation
-(NSString *) stringByStrippingHTML:(NSString*)inputString
{
NSAttributedString *attrString = [[NSAttributedString alloc] initWithData:[inputString dataUsingEncoding:NSUTF8StringEncoding] options:@{NSDocumentTypeDocumentAttribute: NSHTMLTextDocumentType,NSCharacterEncodingDocumentAttribute: @(NSUTF8StringEncoding)} documentAttributes:nil error:nil];
NSString *str= [attrString string];
//you can add replacements here as your needs require:
str = [str stringByReplacingOccurrencesOfString:@"[" withString:@""];
str = [str stringByReplacingOccurrencesOfString:@"]" withString:@""];
str = [str stringByReplacingOccurrencesOfString:@"\n" withString:@""];
return str;
}
Usage:
cell.exampleClass.text = [self stringByStrippingHTML:[exampleJSONParsingArray valueForKey: @"key"]];
or simply
NSString *myClearStr = [self stringByStrippingHTML:rudeStr];
The each
function iterates over an array, calling the supplied function once per element, and setting this
to the active element. This:
function countdown() {
alert(this + "..");
}
$([5, 4, 3, 2, 1]).each(countdown);
will alert 5..
then 4..
then 3..
then 2..
then 1..
Map on the other hand takes an array, and returns a new array with each element changed by the function. This:
function squared() {
return this * this;
}
var s = $([5, 4, 3, 2, 1]).map(squared);
would result in s being [25, 16, 9, 4, 1]
.
This feature is known as generics. http://msdn.microsoft.com/en-us/library/512aeb7t(v=vs.100).aspx
An example of this is to make a collection of items of a specific type.
class MyArray<T>
{
T[] array = new T[10];
public T GetItem(int index)
{
return array[index];
}
}
In your code, you could then do something like this:
MyArray<int> myArray = new MyArray<int>();
In this case, T[] array would work like int[] array, and public T GetItem would work like public int GetItem.
If upload_file is meant to be the file, use:
files = {'upload_file': open('file.txt','rb')}
values = {'DB': 'photcat', 'OUT': 'csv', 'SHORT': 'short'}
r = requests.post(url, files=files, data=values)
and requests will send a multi-part form POST body with the upload_file field set to the contents of the file.txt file.
The filename will be included in the mime header for the specific field:
>>> import requests
>>> open('file.txt', 'wb') # create an empty demo file
<_io.BufferedWriter name='file.txt'>
>>> files = {'upload_file': open('file.txt', 'rb')}
>>> print(requests.Request('POST', 'http://example.com', files=files).prepare().body.decode('ascii'))
--c226ce13d09842658ffbd31e0563c6bd
Content-Disposition: form-data; name="upload_file"; filename="file.txt"
--c226ce13d09842658ffbd31e0563c6bd--
Note the filename="file.txt" parameter.
You can use a tuple for the files mapping value, with between 2 and 4 elements, if you need more control. The first element is the filename, followed by the contents, and an optional content-type header value and an optional mapping of additional headers:
files = {'upload_file': ('foobar.txt', open('file.txt','rb'), 'text/x-spam')}
This sets an alternative filename and content type, leaving out the optional headers.
If you mean for the whole POST body to be taken from a file (with no other fields specified), then don't use the files parameter; just post the file directly as data. You then may want to set a Content-Type header too, as none will be set otherwise. See Python requests - POST data from a file.
Bash has select for this purpose.
select result in Yes No Cancel
do
echo $result
done
For the folder name and drive, you can use:
echo %~dp0
You can get a lot more information using different modifiers:
%~I - expands %I removing any surrounding quotes (")
%~fI - expands %I to a fully qualified path name
%~dI - expands %I to a drive letter only
%~pI - expands %I to a path only
%~nI - expands %I to a file name only
%~xI - expands %I to a file extension only
%~sI - expanded path contains short names only
%~aI - expands %I to file attributes of file
%~tI - expands %I to date/time of file
%~zI - expands %I to size of file
The modifiers can be combined to get compound results:
%~dpI - expands %I to a drive letter and path only
%~nxI - expands %I to a file name and extension only
%~fsI - expands %I to a full path name with short names only
This is a copy paste from the "for /?" command on the prompt. Hope it helps.
Top 10 DOS Batch tips (Yes, DOS Batch...) shows batchparams.bat (link to source as a gist):
C:\Temp>batchparams.bat c:\windows\notepad.exe
%~1 = c:\windows\notepad.exe
%~f1 = c:\WINDOWS\NOTEPAD.EXE
%~d1 = c:
%~p1 = \WINDOWS\
%~n1 = NOTEPAD
%~x1 = .EXE
%~s1 = c:\WINDOWS\NOTEPAD.EXE
%~a1 = --a------
%~t1 = 08/25/2005 01:50 AM
%~z1 = 17920
%~$PATH:1 =
%~dp1 = c:\WINDOWS\
%~nx1 = NOTEPAD.EXE
%~dp$PATH:1 = c:\WINDOWS\
%~ftza1 = --a------ 08/25/2005 01:50 AM 17920 c:\WINDOWS\NOTEPAD.EXE
Use element.classList.add to add a class:
element.classList.add("my-class");
And element.classList.remove to remove a class:
element.classList.remove("my-class");
Add a space plus the name of your new class to the className property of the element. First, put an id on the element so you can easily get a reference.
<div id="div1" class="someclass">
<img ... id="image1" name="image1" />
</div>
Then
var d = document.getElementById("div1");
d.className += " otherclass";
Note the space before otherclass. It's important to include the space; otherwise it compromises existing classes that come before it in the class list.
See also element.className on MDN.
The "cd" command changes the directory, but not what drive you are working with. So when you go "cd d:\temp", you are changing the D drive's directory to temp, but staying in the C drive.
Execute these two commands:
D:
cd temp
That will get you the results you want.
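Alternatively, cmd's cd supports the /d switch to change the drive and the directory in one step:
cd /d d:\temp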
FWIW:
Neither of the other suggestions worked for me. I had previously created a project with the same name, which I then deleted. I recreated the base source files (using PhoneGap), which doesn't create the Eclipse project. I then tried to create an Android project using existing source files, but it failed with the same error message as in the original question.
The solution for me was to move the source folder and files out of the workspace, and use the same option, but this time checking the option for copying the files into the workspace in the wizard.
I have another case where I have no chance to put CDATA into the XML, as I receive the HTML string from a server.
Here is what I get from a server:
<p>The quick brown <br />
fox jumps <br />
over the lazy dog<br />
</p>
It seems to be more complicated but the solution is much simpler.
private TextView textView;
protected void onCreate(Bundle savedInstanceState) {
.....
textView = (TextView) findViewById(R.id.text); //need to define in your layout
String htmlFromServer = getHTMLContentFromAServer();
textView.setText(Html.fromHtml(htmlFromServer).toString());
}
Hope it helps!
Linh
Actually, we're now in the globalized world of the 21st century and people no longer communicate using ASCII only, so when answering the question "is it letters only?" you need to take into account letters from non-ASCII alphabets as well. Python has a pretty cool unicodedata library which, among other things, allows categorization of Unicode characters:
>>> unicodedata.category('א')  # a "Letter, other" example
'Lo'
>>> unicodedata.category('A')
'Lu'
>>> unicodedata.category('1')
'Nd'
>>> unicodedata.category('a')
'Ll'
The categories and their abbreviations are defined in the Unicode standard. From here you can quite easily come up with a function like this:
def only_letters(s):
for c in s:
cat = unicodedata.category(c)
if cat not in ('Ll','Lu','Lo'):
return False
return True
And then:
>>> only_letters('Bzdrezylo')
True
>>> only_letters('He7lo')
False
As you can see the whitelisted categories can be quite easily controlled by the tuple inside the function. See this article for a more detailed discussion.
The arrange() function in dplyr is my favorite option. Use the pipe operator and go from the least important to the most important aspect:
dd1 <- dd %>%
arrange(z) %>%
arrange(desc(x))
For completion, in ES6 we've got classes (supported at the time of writing this only by latest browsers, but available in Babel, TypeScript and other transpilers)
class Foo {
constructor(){
this.a = 5;
this.b = 6;
this.c = this.a + this.b;
}
}
const foo = new Foo();
This worked for me
In JSP, to view the image
<img src="${pageContext.request.contextPath}/resources/images/slide-are.jpg">
In dispatcher-servlet.xml
<mvc:annotation-driven />
<mvc:resources mapping="/resources/**" location="/WEB-INF/resources/" />
Here's a variation I had to solve that's worth sharing: for each unique string in columnA I wanted to find the most common associated string in columnB.
df.groupby('columnA').agg({'columnB': lambda x: x.mode().any()}).reset_index()
The .any() picks one if there's a tie for the mode. (Note that using .any() on a Series of ints returns a boolean rather than picking one of them.)
For the original question, the corresponding approach simplifies to
df.groupby('columnA').columnB.agg('max').reset_index()
As explained here, in the context of main, both return and exit do the same thing.
Q: Why do we need to return or exit?
A: To indicate execution status.
In your example, even if you didn't have return or exit statements, the code would run fine (assuming everything else is syntactically correct, etc. Also, since main should return int, you need that return 0 at the end).
But, after execution you don't have a way to find out if your code worked as expected.
You can use the return code of the program (in *nix environments, using $?), which gives you the code as set by exit or return. Since you set these codes yourself, you know which point the code reached before terminating.
You can write return 123 where 123 indicates success in the post-execution checks.
Usually, in *nix environments 0 is taken as success and non-zero codes as failures.
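To make the convention concrete, here's a minimal sketch in Python (the file name check_input.py is made up), plus how you'd inspect the status from the shell:

```
# check_input.py -- hypothetical demo of exit status
import sys

def main():
    if len(sys.argv) < 2:
        return 1   # non-zero: failure
    return 0       # zero: success

if __name__ == "__main__":
    sys.exit(main())
```

Running python check_input.py with no argument and then echo $? prints 1; run it with an argument and it prints 0.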
In Oracle DB there is a trick for casting int to float (I suppose it should also work in MySQL):
select myintfield + 0.0 as myfloatfield from mytable
While @Heximal's answer works, I don't personally recommend it.
This is because it uses implicit casting. Although you didn't type CAST, either the SUM() or the 0.0 needs to be cast to the same data-type before the + can happen. In this case the order of precedence is in your favour, and you get a float on both sides, and a float as a result of the +. But SUM(aFloatField) + 0 does not yield an INT, because the 0 is being implicitly cast to a FLOAT.
I find that in most programming cases, it is much preferable to be explicit. Don't leave things to chance, confusion, or interpretation.
If you want to be explicit, I would use the following.
CAST(SUM(sl.parts) AS FLOAT) * cp.price
-- note: CAST(... AS FLOAT) in MySQL requires version 8.0+
You can try the following to see what happens...
CAST(SUM(sl.parts) AS NUMERIC(10,4)) * CAST(cp.price AS NUMERIC(10,4))
Well, first of all, in Python there is no ! operator - that'd be not. But open would not fail silently either; it would throw an exception. And the blocks need to be indented properly - Python uses whitespace to indicate block containment.
Thus we get:
fn = input('Enter file name: ')
try:
    file = open(fn, 'r')
except IOError:
    file = open(fn, 'w')
SSLSocketFactory does not expose its X509TrustManager, which is a field that OkHttp needs to build a clean certificate chain. This method instead must use reflection to extract the trust manager. Applications should prefer to call sslSocketFactory(SSLSocketFactory, X509TrustManager), which avoids such reflection.
Source: OkHttp documentation
OkHttpClient.Builder builder = new OkHttpClient.Builder();
builder.sslSocketFactory(sslContext.getSocketFactory(),
new X509TrustManager() {
@Override
public void checkClientTrusted(java.security.cert.X509Certificate[] chain, String authType) throws CertificateException {
}
@Override
public void checkServerTrusted(java.security.cert.X509Certificate[] chain, String authType) throws CertificateException {
}
@Override
public java.security.cert.X509Certificate[] getAcceptedIssuers() {
return new java.security.cert.X509Certificate[]{};
}
});
This works in IE without even having to declare a var for the table and will delete all rows:
// No i++ here: each deleteRow() shifts the remaining rows up,
// so the loop ends once rows.length reaches 0.
for (var i = 0; i < resultsTable.rows.length;)
{
    resultsTable.deleteRow(i);
}
To concatenate all the project manager names from projects that have multiple project managers write:
SELECT a.project_id, a.project_name,
       Stuff((SELECT N'/ ' + first_name + ', ' + last_name
              FROM projects_v
              WHERE a.project_id = project_id
              FOR XML PATH(''), TYPE).value('text()[1]', 'nvarchar(max)'), 1, 2, N'') mgr_names
FROM projects_v a
GROUP BY a.project_id, a.project_name
I had the same problem. I tried a lot of ways, but in the end the solution was simple. Solution: open IIS, and in Application Pools, right click on the .NET framework pool that is being used. Go to settings and change 'Enable 32-Bit Applications' to 'True'.
This is the code that I use in my BATCH files
```
@echo off
set param=%~1
set tempfile=__temp__.txt
dir /b/ad > %tempfile%
set isfolder=false
for /f "delims=" %%i in (temp.txt) do if /i "%%i"=="%param%" set isfolder=true
del %tempfile%
echo %isfolder%
if %isfolder%==true echo %param% is a directory
```
Use std::getline() from <string>:
istream& getline(istream& is, std::string& str);
So, for your case it would be:
std::getline(read,x);
This type of issue generally occurs when migrating from one server to another, or from one folder to another. Laravel caches the configuration (including file paths), so when the folder changes this problem occurs.
Solution: run the following command:
php artisan config:cache
https://laravel.com/docs/5.6/configuration#configuration-caching
This is with reference to the original question
$('select').val(['a','c']);
$('select').trigger('change');
As described by Crist Clark.
CLOSE_WAIT means that the local end of the connection has received a FIN from the other end, but the OS is waiting for the program at the local end to actually close its connection.
The problem is your program running on the local machine is not closing the socket. It is not a TCP tuning issue. A connection can (and quite correctly) stay in CLOSE_WAIT forever while the program holds the connection open.
Once the local program closes the socket, the OS can send the FIN to the remote end which transitions you to LAST_ACK while you wait for the ACK of the FIN. Once that is received, the connection is finished and drops from the connection table (if your end is in CLOSE_WAIT you do not end up in the TIME_WAIT state).
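A minimal Python sketch of the situation, if you want to see it for yourself (port and timing are arbitrary):

```
import socket, time

# The accepting side never close()s its socket, so once the client
# sends its FIN, the accepted socket sits in CLOSE_WAIT.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()

cli.close()        # client sends FIN
time.sleep(0.5)    # run 'netstat -an | grep CLOSE_WAIT' now to see it
conn.close()       # only now does the OS send its FIN (LAST_ACK follows)
```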
SQL Server does simple counting of cascade paths and, rather than trying to work out whether any cycles actually exist, it assumes the worst and refuses to create the referential actions (CASCADE): you can and should still create the constraints without the referential actions. If you can't alter your design (or doing so would compromise things) then you should consider using triggers as a last resort.
FWIW resolving cascade paths is a complex problem. Other SQL products will simply ignore the problem and allow you to create cycles, in which case it will be a race to see which will overwrite the value last, probably to the ignorance of the designer (e.g. ACE/Jet does this). I understand some SQL products will attempt to resolve simple cases. Fact remains, SQL Server doesn't even try, plays it ultra safe by disallowing more than one path and at least it tells you so.
Microsoft themselves advise the use of triggers instead of FK constraints.
According to this code golfing solution by Gaffi, the following works:
a = Split(StrConv(s, 64), Chr(0))
Understand that .children is a property of an Element.1 Only Elements have .children, and these children are all of type Element.2
However, .childNodes is a property of Node. .childNodes can contain any node.3
A concrete example would be:
let el = document.createElement("div");
el.textContent = "foo";
el.childNodes.length === 1; // Contains a Text node child.
el.children.length === 0; // No Element children.
Most of the time, you want to use .children because generally you don't want to loop over Text or Comment nodes in your DOM manipulation.
If you do want to manipulate Text nodes, you probably want .textContent instead.4
1. Technically, it is an attribute of ParentNode, a mixin included by Element.
2. They are all elements because .children is an HTMLCollection, which can only contain elements.
3. Similarly, .childNodes can hold any node because it is a NodeList.
4. Or .innerText. See the differences here or here.
I'd recommend looking at Jeff Richter's Power Threading Library and specifically the IAsyncEnumerator. Take a look at the video on Charlie Calvert's blog where Richter goes over it for a good overview.
Don't be put off by the name because it makes asynchronous programming tasks easier to code.
In some cases, for security reasons, the unit may be marked as masked. This state is much stronger than disabled: you cannot even start the service manually.
To check this, run the following command:
systemctl list-unit-files | grep mongod
if you find out something like this:
mongod.service masked
then you can unmask the unit by:
sudo systemctl unmask mongod
then you may want to start the service:
sudo systemctl start mongod
and to enable auto-start during system boot:
sudo systemctl enable mongod
However if mongodb did not start again or was not masked at all, you have the option to reinstall it this way:
sudo apt-get remove --purge mongo*
sudo apt-get install mongodb-org
thanks to @jehanzeb-malik
What if your number is 2.39999999999999? I suppose you want to get the exact decimal value. Then use BigDecimal:
Integer intPart;
BigDecimal bd, bdInt, bdDec;
bd = new BigDecimal("2.39999999999999");
intPart = bd.intValue();
bdInt = new BigDecimal(intPart);
bdDec = bd.subtract(bdInt);
System.out.println("Number : " + bd);
System.out.println("Whole number part : " + bdInt);
System.out.println("Decimal number part : " + bdDec);
function loadData(callback) {
//execute other requirement
if(callback && typeof callback == "function"){
callback();
}
}
loadData(function(){
//execute callback
});
On Vista you can use IFileDialog with the FOS_PICKFOLDERS option set. That will display an OpenFileDialog-like window where you can select folders:
var frm = (IFileDialog)(new FileOpenDialogRCW());
uint options;
frm.GetOptions(out options);
options |= FOS_PICKFOLDERS;
frm.SetOptions(options);
if (frm.Show(owner.Handle) == S_OK) {
IShellItem shellItem;
frm.GetResult(out shellItem);
IntPtr pszString;
shellItem.GetDisplayName(SIGDN_FILESYSPATH, out pszString);
this.Folder = Marshal.PtrToStringAuto(pszString);
}
For older Windows you can always resort to trick with selecting any file in folder.
Working example that works on .NET Framework 2.0 and later can be found here.
You can convert the integer column into string first and then do the operation, e.g.
df3['nID']=df3['nID'].astype(str)
grp = df3.groupby('userID')['nID'].aggregate(lambda x: '->'.join(tuple(x)))
If you want to create a hive table (which does not exist) from a dataframe (sometimes it fails to create with DataFrameWriter.saveAsTable), StructType.toDDL will help in listing the columns as a string.
val df = ...
val schemaStr = df.schema.toDDL // This gives the columns
spark.sql(s"""create table hive_table ( ${schemaStr})""")
//Now write the dataframe to the table
df.write.saveAsTable("hive_table")
hive_table will be created in the default database since we did not provide any database at spark.sql(). stg.hive_table can be used to create hive_table in the stg database.
We can use the new fs.promises API
const fs = require('fs').promises;
(async() => {
const stat = await fs.lstat('test.txt');
console.log(stat.isFile());
})().catch(console.error)
Here's how you would detect if a path is a file or a directory asynchronously, which is the recommended approach in node, using fs.lstat:
const fs = require("fs");
let path = "/path/to/something";
fs.lstat(path, (err, stats) => {
if(err)
return console.log(err); //Handle error
console.log(`Is file: ${stats.isFile()}`);
console.log(`Is directory: ${stats.isDirectory()}`);
console.log(`Is symbolic link: ${stats.isSymbolicLink()}`);
console.log(`Is FIFO: ${stats.isFIFO()}`);
console.log(`Is socket: ${stats.isSocket()}`);
console.log(`Is character device: ${stats.isCharacterDevice()}`);
console.log(`Is block device: ${stats.isBlockDevice()}`);
});
Note when using the synchronous API:
When using the synchronous form any exceptions are immediately thrown. You can use try/catch to handle exceptions or allow them to bubble up.
try {
    fs.lstatSync("/some/path").isDirectory()
} catch (e) {
    // Handle error
    if (e.code == 'ENOENT') {
        // no such file or directory
        // do something
    } else {
        // do something else
    }
}
I would advise against using """ for multi-line comments!
Here is a simple example to highlight what might be considered an unexpected behavior:
print('{}\n{}'.format(
'I am a string',
"""
Some people consider me a
multi-line comment, but
"""
'clearly I am also a string'
)
)
Now have a look at the output:
I am a string
Some people consider me a
multi-line comment, but
clearly I am also a string
The multi-line string was not treated as a comment; it was concatenated with 'clearly I am also a string' to form a single string.
If you want to comment multiple lines do so according to PEP 8 guidelines:
print('{}\n{}'.format(
'I am a string',
# Some people consider me a
# multi-line comment, but
'clearly I am also a string'
)
)
Output:
I am a string
clearly I am also a string
Protected data members can be accessed by any classes that inherit from your class. Private data members, however, cannot. Let's say we have the following:
class MyClass {
private:
int myPrivateMember; // lol
protected:
int myProtectedMember;
};
From within a class that extends this class, referencing this->myPrivateMember won't work. However, this->myProtectedMember will. The value is still encapsulated, so if we have an instantiation of this class called myObj, then myObj.myProtectedMember won't work, so it is similar in function to a private data member.
If you installed Python 2.7, I think you could use pip2 and pip2.7 to install packages specifically for Python 2, like
pip2 install some_package
or
pip2.7 install some_package
And you may use pip3 or pip3.5 to install packages specifically for Python 3.
It's called a member initializer list. An initializer list is how you pass arguments to your member variables' constructors and how you pass arguments to the parent class's constructor.
If you use = to assign in the constructor body instead, the default constructor is called first, then the assignment operator. This is a bit wasteful, and sometimes there's no equivalent assignment operator.
In this case a[4] is the 5th integer in the array a, and ap is a pointer to integer, so you are assigning an integer to a pointer and that's the warning.
So ap now holds 45, and when you try to de-reference it (by doing *ap) you are trying to access memory at address 45, which is an invalid address, so your program crashes.
You should do ap = &(a[4]); or ap = a + 4;
In C, array names decay to pointers, so a points to the 1st element of the array. In this way, a is equivalent to &(a[0]).
And another solution is Google reflections.
Quick review:
Sure you can.
ALTER TABLE myTable ALTER COLUMN myColumn int NULL
Just substitute int for whatever datatype your column is.
Sample of the Recursive Level:
DECLARE @VALUE_CODE AS VARCHAR(5);
--SET @VALUE_CODE = 'A' -- Specify a level
WITH ViewValue AS
(
SELECT ValueCode
, ValueDesc
, PrecedingValueCode
FROM ValuesTable
WHERE PrecedingValueCode IS NULL
UNION ALL
SELECT A.ValueCode
, A.ValueDesc
, A.PrecedingValueCode
FROM ValuesTable A
INNER JOIN ViewValue V ON
V.ValueCode = A.PrecedingValueCode
)
SELECT ValueCode, ValueDesc, PrecedingValueCode
FROM ViewValue
--WHERE PrecedingValueCode = @VALUE_CODE -- Specific level
--WHERE PrecedingValueCode IS NULL -- Root
Make sure mod_rewrite is enabled in the Apache settings. See the link here: https://github.com/h5bp/server-configs-apache/wiki/How-to-enable-Apache-modules
Then make sure you have the correct .htaccess: https://wordpress.org/support/topic/404-errors-with-permalinks-set-to-postname/
And the correct virtual host settings in Apache (see: How to Set AllowOverride all).
Something like this - for each line read into string variable a:
>>> a = "123,456"
>>> b = a.split(",")
>>> b
['123', '456']
>>> c = [int(e) for e in b]
>>> c
[123, 456]
>>> x, y = c
>>> x
123
>>> y
456
Now you can do what is necessary with x and y as assigned, which are integers.
string1.equals(string2) is the right way to do it.
String s = "something", t = "maybe something else";
if (s == t) // legal, but usually gives the WRONG result
if (s.equals(t)) // RIGHT way to check the two strings
/* == will fail in following case:*/
String s1 = new String("abc");
String s2 = new String("abc");
if(s1==s2) //it will return false
DateTime is a value type, which means it always has some value.
It's like an integer - it can be 0, or 1, or less than zero, but it can never be "nothing".
If you want a DateTime that can take the value Nothing, use a Nullable DateTime.
You can use backslash to quote "funny" characters in your jQuery selectors:
$('#input\\[23\\]')
For attribute values, you can use quotes:
$('input[name="weirdName[23]"]')
Now, I'm a little confused by your example; what exactly does your HTML look like? Where does the string "inputName" show up, in particular?
edit fixed bogosity; thanks @Dancrumb
If you want a particular <canvas id="canvasID"> to be always transparent, you just have to set
#canvasID{
opacity:0.5;
}
Instead, if you want some particular elements inside the canvas area to be transparent, you have to set transparency when you draw, i.e.
context.fillStyle = "rgba(0, 0, 200, 0.5)";
Although if you want a code that is compatible with both python2 and python3 you can use this:
import logging

try:
    1/0
except Exception as e:
    if hasattr(e, 'message'):
        logging.warning('python2')
        logging.error(e.message)
    else:
        logging.warning('python3')
        logging.error(e)
I think an ad-hoc way would be to use the environment variable PYTHONPATH, as described in the documentation: Python2, Python3
# Linux & OSX
export PYTHONPATH=$HOME/dirWithScripts/:$PYTHONPATH
# Windows
set PYTHONPATH=C:\path\to\dirWithScripts\;%PYTHONPATH%
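If you'd rather not touch the environment at all, the same effect is available from inside a script by appending to sys.path before importing - a sketch, where the directory path and module name are made up:

```
import sys
sys.path.append("/home/me/dirWithScripts")  # hypothetical directory

import my_script  # hypothetical module living in that directory
```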
Make Theme.AppCompat style parent
<style name="AppTheme" parent="Theme.AppCompat">
<item name="android:colorPrimary">#005555</item>
<item name="android:colorPrimaryDark">#003333</item>
</style>
And put getSupportActionBar().getThemedContext() in onCreate().
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_my);
getSupportActionBar().getThemedContext();
}
I was using almost the same technology as you; however, I was using my app to connect to an Exchange Server via the Office 365 platform on WinForms. I too had the same issue as you did, but was able to accomplish it using code which is a slight modification of what others have given above.
SmtpClient client = new SmtpClient(exchangeServer, 587);
client.Credentials = new System.Net.NetworkCredential(username, password);
client.EnableSsl = true;
client.Send(msg);
I had to use port 587, which is of course the default port over TLS, and then did the authentication.
You can return json in PHP this way:
header('Content-Type: application/json');
echo json_encode(array('foo' => 'bar'));
exit;
I'm Daniel Stenberg.
I founded the curl project back in 1998, I wrote the initial curl version and I created libcurl. I've written more than half of all the 24,000 commits done in the source code repository up to this point in time. I'm still the lead developer of the project. To a large extent, curl is my baby.
I shipped the first version of curl as open source since I wanted to "give back" to the open source world that had given me so much code already. I had used so much open source and I wanted to be as cool as the other open source authors.
Thanks to it being open source, literally thousands of people have been able to help us out over the years and have improved the products, the documentation, the web site and just about every other detail around the project. curl and libcurl would never have become the products that they are today were they not open source. The list of contributors now surpasses 1900 names and currently grows by a few hundred names per year.
Thanks to curl and libcurl being open source and liberally licensed, they were immediately adopted in numerous products and soon shipped by operating systems and Linux distributions everywhere thus getting a reach beyond imagination.
Thanks to them being "everywhere", available and liberally licensed they got adopted and used everywhere and by everyone. It created a defacto transfer library standard.
At an estimated six billion installations world wide, we can safely say that curl is the most widely used internet transfer library in the world. It simply would not have gone there had it not been open source. curl runs in billions of mobile phones, a billion Windows 10 installations, in a half a billion games and several hundred million TVs - and more.
Should I have released it with a proprietary license instead and charged users for it? It never occurred to me, and it wouldn't have worked because I would never have managed to create this kind of stellar project on my own. And projects and companies wouldn't have used it.
Now, why do I and my fellow curl developers still continue to develop curl and give it away for free to the world?
Yes. So insanely much.
But I'm not satisfied with this and I'm not just leaning back, happy with what we've done. I keep working on curl every single day, to improve, to fix bugs, to add features and to make sure curl keeps being the number one file transfer solution for the world even going forward.
We do mistakes along the way. We make the wrong decisions and sometimes we implement things in crazy ways. But to win in the end and to conquer the world is about patience and endurance and constantly going back and reconsidering previous decisions and correcting previous mistakes. To continuously iterate, polish off rough edges and gradually improve over time.
Never give in. Never stop. Fix bugs. Add features. Iterate. To the end of time.
Yeah. For real.
Sure I get tired at times. Working on something every day for over twenty years isn't a paved downhill road. Sometimes there are obstacles. At times things are rough. Occasionally people are just as ugly and annoying as people can be.
But curl is my life's project and I have patience. I have thick skin and I don't give up easily. The tough times pass and most days are awesome. I get to hang out with awesome people, and the reward of knowing that my code helps drive the Internet revolution everywhere is an ego boost above normal.
curl will never be "done" and so far I think work on curl is pretty much the most fun I can imagine. Yes, I still think so even after twenty years in the driver's seat. And as long as I think it's fun I intend to keep at it.
You can use @UtilityClass annotation from lombok https://projectlombok.org/features/experimental/UtilityClass
Gradle:
dependencies {
implementation 'com.google.code.gson:gson:2.8.5'
}
Maven:
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.8.5</version>
</dependency>
Gson jar downloads are available from Maven Central.
Your table width is 90%, which is relative to its container.
If you squeeze the page, you are probably squeezing the table width as well. The width of the cells reduces too, and the browser compensates by increasing the height.
To keep the height untouched, you have to make sure the widths of the cells can hold the intended content. Fixing the table width is probably something you want to try. Or perhaps play around with the min-width of the table.
You could also execute tree > tree.txt in both folders and then diff the two tree.txt files with any file-based diff tool (git diff).
To check if a folder exists or not, you can simply use the exists() method:
// Create a File object representing the folder 'A/B'
def folder = new File( 'A/B' )
// If it doesn't exist
if( !folder.exists() ) {
// Create all folders up-to and including B
folder.mkdirs()
}
// Then, write to file.txt inside B
new File( folder, 'file.txt' ).withWriterAppend { w ->
w << "Some text\n"
}
For people who are using AppCompatActivity with Toolbar as white background. Do use this code.
Updated: December, 2017
<android.support.design.widget.AppBarLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
app:theme="@style/ThemeOverlay.AppCompat.Light">
<android.support.v7.widget.Toolbar
android:id="@+id/toolbar_edit"
android:layout_width="match_parent"
android:layout_height="?attr/actionBarSize"
app:popupTheme="@style/AppTheme.AppBarOverlay"
app:title="Edit Your Profile"/>
</android.support.design.widget.AppBarLayout>
You will need to get the textbox via JavaScript when moving the mouse over it and change its type to text. When moving it out, you will want to change it back to password. There is no chance of doing this in pure CSS.
HTML:
<input type="password" name="password" id="myPassword" size="30" />
<img src="theicon" onmouseover="mouseoverPass();" onmouseout="mouseoutPass();" />
JS:
function mouseoverPass() {
    var obj = document.getElementById('myPassword');
    obj.type = "text";
}
function mouseoutPass() {
    var obj = document.getElementById('myPassword');
    obj.type = "password";
}
I faced the same problem, and I solved it in a tricky way. First make global variables, v1, v2, and v3. And in the onclick, send an indicator, 1, 2, or 3 and in the function check for 1, 2, 3 to put the v1, v2, and v3 like:
onclick="myfun(1)"
onclick="myfun(2)"
onclick="myfun(3)"
// note: 'var' is a reserved word in JavaScript, so the parameter needs a different name
function myfun(which)
{
    if (which == 1)
        alert(v1);
    if (which == 2)
        alert(v2);
    if (which == 3)
        alert(v3);
}
The following should work for you:
[\n\r].*Object Name:\s*([^\n\r]*)
Your desired match will be in capture group 1.
[\n\r][ \t]*Object Name:[ \t]*([^\n\r]*)
Would be similar but not allow for things such as " blah Object Name: blah" and also make sure that not to capture the next line if there is no actual content after "Object Name:"
The documentation for Gerrit, in particular the "Push changes" section, explains that you push to the "magical refs/for/'branch' ref using any Git client tool".
The following image is taken from the Intro to Gerrit. When you push to Gerrit, you do git push gerrit HEAD:refs/for/<BRANCH>. This pushes your changes to the staging area (in the diagram, "Pending Changes"). Gerrit doesn't actually have a branch called <BRANCH>; it lies to the git client.
Internally, Gerrit has its own implementation of the Git and SSH stacks. This allows it to provide the "magical" refs/for/<BRANCH> refs.
When a push request is received to create a ref in one of these namespaces Gerrit performs its own logic to update the database, and then lies to the client about the result of the operation. A successful result causes the client to believe that Gerrit has created the ref, but in reality Gerrit hasn’t created the ref at all. [Link - Gerrit, "Gritty Details"].
After a successful patch (i.e., the patch has been pushed to Gerrit [putting it into the "Pending Changes" staging area], reviewed, and the review has passed), Gerrit pushes the change from the "Pending Changes" into the "Authoritative Repository", calculating which branch to push it into based on the magic it did when you pushed to refs/for/<BRANCH>. This way, successfully reviewed patches can be pulled directly from the correct branches of the Authoritative Repository.
There are multiple ways of doing that: you can use either place or grid or even the pack method.
Sample code:
from tkinter import *
root = Tk()
l = Label(root, text="hello" )
l.pack(padx=6, pady=4) # where padx and pady represent the x and y axis respectively
# well you can also use side=LEFT inside the pack method of the label widget.
To place a widget on the basis of columns and rows, use the grid method:
but = Button(root, text="hello" )
but.grid(row=0, column=1)
The sizzle selector engine (what powers JQuery) is perfectly geared up for this:
var elements = $('input[type=text]');
Or
var elements = $('input:text');
You have to be aware that when using Interface Builder and creating a Xib (nib) containing one cell, there is also a placeholder created that points to the class that will be used. Meaning, when you place two UITableViewCells in one Xib file you possibly run into the exact same trouble causing an *** Assertion failure .... The placeholder mechanism doesn't work and will be confused then. Instead of placing different placeholders in one Xib, read on..
The easiest solution (even if that seems a bit too simple) is to place one Cell at a time in one Xib. IB will create a placeholder for you and all works as expected then. But this then leads directly to one extra line of code, because you need to load the correct nib/xib when asking for the Cell with the reuseIdentifier it resides in. So the following example code focuses on the use of multiple Cell Identifiers in one tableview, where an Assertion Failure is very common.
// possibly above class implementation
static NSString *firstCellIdentifier = @"firstCellIdentifier";
static NSString *secondCellIdentifier = @"secondCellIdentifier";
// possibly in -(instancetype)init
UINib *firstNib = [UINib nibWithNibName:@"FirstCell" bundle:nil];
[self.tableView registerNib:firstNib forCellReuseIdentifier:firstCellIdentifier];
UINib *secondNib = [UINib nibWithNibName:@"SecondCell" bundle:nil];
[self.tableView registerNib:secondNib forCellReuseIdentifier:secondCellIdentifier];
Another trouble with the use of two CellIdentifiers in one UITableView is that row height and/or section height have to be taken care of. Two cells can have different heights, of course.
When registering classes for reuse the code should look different.
The "simple solution" looks also much different when your cells reside inside a Storyboard instead of Xib's. Watch out for the placeholders.
Keep also in mind that Interface Builder files have version-wise variations and need to be set to a version that your targeted OS version supports, even if you may be lucky and the particular feature you placed in your Xib has not changed since the last IB version and does not throw errors yet. So a Xib made with IB set to be compatible with iOS 13+ but used in a target that is compiled for an earlier version like iOS 12.4 will also cause trouble and can end up with an Assertion failure.
Variable names in PHP start with $ so $entryId is the name of a variable. $this is a special variable in Object Oriented programming in PHP, which is reference to current object. -> is used to access an object member (like properties or methods) in PHP, like the syntax in C++. so your code means this:
Place the value of variable $entryId into the entryId field (or property) of this object.
The & operator in PHP, means pass reference. Here is a example:
$b=2;
$a=$b;
$a=3;
print $a;
print $b;
// output is 32
$b=2;
$a=&$b; // note the & operator
$a=3;
print $a;
print $b;
// output is 33
In the above code, because we used & operator, a reference to where $b is pointing is stored in $a. So $a is actually a reference to $b.
In PHP, arguments are passed by value by default (inspired by C). So when calling a function, when you pass in your values, they are copied by value not by reference. This is the default IN MOST SITUATIONS. However there is a way to have pass by reference behaviour, when defining a function. Example:
function plus_by_reference( &$param ) {
// what ever you do, will affect the actual parameter outside the function
$param++;
}
$a=2;
plus_by_reference( $a );
echo $a;
// output is 3
There are many built-in functions that behave like this. Like the sort() function that sorts an array will affect directly on the array and will not return another sorted array.
There is something interesting to note though. Because pass-by-value mode could result in more memory usage, and PHP is an interpreted language (so programs written in PHP are not as fast as compiled programs), to make the code run faster and minimize memory usage, there are some tweaks in the PHP interpreter. One is lazy-copy, commonly known as copy-on-write. Which means this:
When you are copying a variable into another, PHP will copy a reference to the first variable into the second variable. So your new variable is actually a reference to the first one until now. The value is not copied yet. But if you try to change any of these variables, PHP will make a copy of the value, and then change the variable. This way you have the opportunity to save memory and time, IF YOU DO NOT CHANGE THE VALUE.
So:
$b=3;
$a=$b;
// $a points to $b, equals to $a=&$b
$b=4;
// now PHP will copy 3 into $a, and places 4 into $b
After all this: if you want to place the value of $entryId into the 'entryId' property of your object, the above code will do this, and will not copy the value of entryId until you change either of them, resulting in less memory usage. If you actually want them both to point to the same value, then use this:
$this->entryId=&$entryId;
substr() would be best, you'll also want to check the length of the string first
$str = 'someLongString';
$max = 7;
if(strlen($str) > $max) {
$str = substr($str, 0, $max) . '...';
}
wordwrap won't trim the string down, just split it up...
I have had a similar requirement. After a lot of testing I found a simple but not very elegant solution (not sure if it will work for you?)...
After my macro refreshes the data that Excel is getting, I added the line "Calculate" into my macro (normally used to recalculate the workbook if you have set calculation to manual).
While I shouldn't need to do this, it appears that by adding it in, Excel waits while the data is refreshed before continuing with the rest of my macro.
The form's "on submit" handlers are called before the form is submitted. I don't know if there is a handler to be called after the form is submited. In the traditional non-Javascript sense the form submission will reload the page.
Just replacing
using (WebApp.Start(url))
with
using (WebApp.Start<Startup>(url))
solved my problem. The class named Startup was already implemented, as mentioned above by @robthedev.
This should work.
SELECT a.[CUSTOMER ID], a.[NAME], SUM(b.[AMOUNT]) AS [TOTAL AMOUNT]
FROM RES_DATA a INNER JOIN INV_DATA b
ON a.[CUSTOMER ID]=b.[CUSTOMER ID]
GROUP BY a.[CUSTOMER ID], a.[NAME]
I tested it with SQL Fiddle against SQL Server 2008: http://sqlfiddle.com/#!3/1cad5/1
Basically what's happening here is that, because of the join, you are getting the same row on the "left" (i.e. from the RES_DATA table) for every row on the "right" (i.e. the INV_DATA table) that has the same [CUSTOMER ID] value. When you group by just the columns on the left side, and then do a sum of just the [AMOUNT] column from the right side, it keeps the one row intact from the left side, and sums up the matching values from the right side.
Remount / (the root filesystem), e.g.:
mount -o remount,rw /dev/xyz /
sed -i 's/1 1/0 0/' /etc/fstab
sed -i 's/1 2/0 0/' /etc/fstab
I believe you need to put the libeay32.dll and ssleay32.dll files in the system folder.
How about this approach:
We don't require an additional endpoint for refreshing the token in this case. Would appreciate any feedback.
As of the date of this answer, GitHub does not have (or shall I say no longer has?) this feature in the web interface. You can, however, ask [email protected] to add your vote for that.
In the meantime, GitHub user bardiharborow has created a tool to do just this: https://upriver.github.io/
Source is here: https://github.com/upriver/upriver.github.io
You can also add the following line to your build script:
println "Running gradle version: $gradle.gradleVersion"
or (it won't be printed with the -q switch)
logger.lifecycle "Running gradle version: $gradle.gradleVersion"
You can create a subclass of Application and store your shared object there. The Application object should exist for the lifetime of your app as long as there is some active component.
From your activities, you can access the application object via getApplication().
read() is equivalent to recv() with a flags parameter of 0. Other values for the flags parameter change the behaviour of recv(). Similarly, write() is equivalent to send() with flags == 0.
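A quick Python sketch of the same idea, since Python's socket API exposes the same flags parameter - MSG_PEEK, for example, reads data without consuming it, while flags of 0 behaves like a plain read:

```
import socket

a, b = socket.socketpair()  # two connected sockets
a.send(b"hello")

peeked = b.recv(5, socket.MSG_PEEK)  # look at the data without consuming it
data = b.recv(5)                     # flags == 0: consumes the data
assert peeked == data == b"hello"
```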
Yes, that is called short-circuiting.
Please take a look at this wikipedia page on short-circuiting
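A quick illustration in Python (the principle is the same in most languages):

```
def boom():
    raise RuntimeError("never reached")

# The right-hand side is only evaluated when it could still change the result
print(False and boom())  # False - boom() is never called
print(True or boom())    # True  - boom() is never called
```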
<?php
// create a "numeric" array
$animals = array('monitor', 'cpu', 'mouse', 'ram', 'wifi', 'usb', 'pendrive');
//Normal display
print_r($animals);
echo "<br/><br/>";
//If splice the array
//array_splice($animals, 2, 2);
unset($animals[3]); // you can unset the particular value
print_r($animals);
?>
You can use database from here -
http://myip.ms/info/cities_sql_database/
CREATE TABLE `cities` (
`cityID` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
`cityName` varchar(50) NOT NULL,
`stateID` smallint(5) unsigned NOT NULL DEFAULT '0',
`countryID` varchar(3) NOT NULL DEFAULT '',
`language` varchar(10) NOT NULL DEFAULT '',
`latitude` double NOT NULL DEFAULT '0',
`longitude` double NOT NULL DEFAULT '0',
PRIMARY KEY (`cityID`),
UNIQUE KEY `unq` (`countryID`,`stateID`,`cityID`),
KEY `cityName` (`cityName`),
KEY `stateID` (`stateID`),
KEY `countryID` (`countryID`),
KEY `latitude` (`latitude`),
KEY `longitude` (`longitude`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Here is the link to documentation INSERT ... ON DUPLICATE Statement.
Starting with MongoDB v3.4 (released in 2016), the $lookup aggregation pipeline stage can also work directly with an array. There is no need for $unwind any more.
This was tracked in SERVER-22881.
Something new to think about looking forward to Object.observe in ES7 and with data-binding in general. Consider:
var foo={
name: "hello"
};
Object.observe(foo, function(){alert('modified');}); // bind to foo
foo={}; // You are no longer bound to foo but to an orphaned version of it
foo.name="there"; // This change will be missed by Object.observe()
So under that circumstance #2 can be the best choice.
For the given json format as in file ~/my-app/src/db/abc.json:
[
{
"name":"Ankit",
"id":1
},
{
"name":"Aditi",
"id":2
},
{
"name":"Avani",
"id":3
}
]
In order to import it into a .js file like ~/my-app/src/app.js:
import React from "react"; // needed for JSX
const json = require("./db/abc.json");

class Arena extends React.Component {
  render() {
    return (
      json.map((user) => {
        return <div>{user.name}</div>;
      })
    );
  }
}

export default Arena;
Output:
Ankit Aditi Avani
If you're actually building a real system, then yes, you'd typically just use the stuff in the standard library if what you need is available there. That said, don't think of this as a pointless exercise. It's good to understand how things work, and understanding linked lists is an important step towards understanding more complex data structures, many of which don't exist in the standard libraries.
There are some differences between the way you're creating a linked list and the way the Java collections API does it. The Collections API is trying to adhere to a more complicated interface. The Collections API linked list is also a doubly linked list, while you're building a singly linked list. What you're doing is more appropriate for a class assignment.
With your LinkedList class, an instance will always be a list of at least one element. With this kind of setup you'd use null for when you need an empty list.
Think of next as being "the rest of the list". In fact, many similar implementations use the name "tail" instead of "next".
Here's a diagram of a LinkedList containing 3 elements:
Note that it's a LinkedList object pointing to a word ("Hello") and a list of 2 elements. The list of 2 elements has a word ("Stack") and a list of 1 element. That list of 1 element has a word ("Overflow") and an empty list (null). So you can treat next as just another list that happens to be one element shorter.
You may want to add another constructor that just takes a String and sets next to null. This would be for creating a 1-element list.
To append, you check if next is null. If it is, create a new one-element list and set next to that.
next = new LinkedList(word);
If next isn't null, then append to next instead.
next.append(word);
This is the recursive approach, which is the least amount of code. You can turn that into an iterative solution which would be more efficient in Java*, and wouldn't risk a stack overflow with very long lists, but I'm guessing that level of complexity isn't needed for your assignment.
* Some languages have tail call elimination, which is an optimization that lets the language implementation convert "tail calls" (a call to another function as the very last step before returning) into (effectively) a "goto". This makes such code completely avoid using the stack, which makes it safer (you can't overflow the stack if you don't use the stack) and typically more efficient. Scheme is probably the most well known example of a language with this feature.
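For illustration only, here's the same recursive idea sketched in Python rather than Java (the names mirror the explanation above):

```
class LinkedList:
    def __init__(self, word, next=None):
        self.word = word
        self.next = next  # "the rest of the list"; None means empty

    def append(self, word):
        if self.next is None:
            self.next = LinkedList(word)  # new one-element list
        else:
            self.next.append(word)        # recurse into the rest

lst = LinkedList("Hello")
lst.append("Stack")
lst.append("Overflow")
```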
What I would do is do this:
<button type="submit" style="background-color:rgba(255,255,255,0.0); border:none;" id="resultButton" onclick="showResults();"><img src="images/search.png" /></button>
I used background-color:rgba(255,255,255,0.0); so that the original background color of the button goes away. The same with border:none; - it takes the original border away.
Add a line getchar(); or system("pause"); before the return 0; in your main function. It will work for you.
The general theory can be found in wikipedia's article on Kademlia. The specific protocol specification used in bittorrent is here: http://wiki.theory.org/BitTorrentDraftDHTProtocol
Use a generator expression:
>>> s = "foo200bar"
>>> new_s = "".join(i for i in s if i in "0123456789")
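If you don't want to spell out the digits, str.isdigit with filter does the same job (note that str.isdigit also accepts non-ASCII digit characters):

```
>>> s = "foo200bar"
>>> "".join(filter(str.isdigit, s))
'200'
```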
This is actually really easy:
As others have noted, the call to .remove() is asynchronous. We should all be aware nothing happens 'instantly', even if it is at the speed of light.
What you mean by 'instantly' is that the next line of code should be able to execute after the call to .remove(). With asynchronous operations, the next line may run when the data has been removed, or it may not - it is totally down to chance and the amount of time that has elapsed.
.remove() takes one parameter, a callback function, to help deal with this situation and perform operations after we know that the operation has been completed (with or without an error). .push() takes two params, a value and a callback, just like .remove().
Here is your example code with modifications:
ref = new Firebase("myfirebase.com")
ref.push({key:val}, function(error){
//do stuff after push completed
});
// deletes all data pushed so far
ref.remove(function(error){
//do stuff after removal
});
Make sure that you have write permissions for the directory you were trying to create the file in (check the directory's properties).
You don't need to pass the entire encoded string to the atob method; you need to split the encoded string and pass only the required segment to atob:
const token= "eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJob3NzYW0iLCJUb2tlblR5cGUiOiJCZWFyZXIiLCJyb2xlIjoiQURNSU4iLCJpc0FkbWluIjp0cnVlLCJFbXBsb3llZUlkIjoxLCJleHAiOjE2MTI5NDA2NTksImlhdCI6MTYxMjkzNzA1OX0.8f0EeYbGyxt9hjggYW1vR5hMHFVXL4ZvjTA6XgCCAUnvacx_Dhbu1OGh8v5fCsCxXQnJ8iAIZDIgOAIeE55LUw"
console.log(atob(token.split(".")[1]));
Here is an end to end solution I implemented for streaming Android microphone audio to a server for playback: Android AudioRecord to Server over UDP Playback Issues
server {
server_name example.com;
root /path/to/root;
location / {
# bla bla
}
location /demo {
alias /path/to/root/production/folder/here;
}
}
If you need to use try_files inside /demo, you'll need to replace alias with a root and do a rewrite, because of the bug explained here.
C++11 added alias declarations, which are a generalization of typedef, allowing templates:
template <size_t N>
using Vector = Matrix<N, 1>;
The type Vector<3> is equivalent to Matrix<3, 1>.
In C++03, the closest approximation was:
template <size_t N>
struct Vector
{
typedef Matrix<N, 1> type;
};
Here, the type Vector<3>::type is equivalent to Matrix<3, 1>.
Simplest way from a Linux server:
sudo mysql --defaults-file=/etc/mysql/debian.cnf -e 'use mysql;UPDATE user SET password=PASSWORD("snippetbucket-technologies") WHERE user="root";FLUSH PRIVILEGES;'
This works on any Linux server; I'm 100% sure on Debian and Ubuntu.
You should use re.MatchObject.group(0), like
imgtag = re.match(r'<img.*?>', line).group(0)
Edit:
You also might be better off doing something like
imgtag = re.match(r'<img.*?>', line)
if imgtag:
    print("yo it's a {}".format(imgtag.group(0)))
to eliminate all the Nones.
I used the min attribute: min="' + (new Date()).toISOString().substring(0,10) + '"
Here's my code and it worked.
<input type="date" min="' + (new Date()).toISOString().substring(0,10) + '" id="' + invdateId + '" value="' + d.Invoice.Invoice_Date__c + '"/>
Application pools allow you to isolate your applications from one another, even if they are running on the same server. This way, if there is an error in one app, it won't take down other applications.
Additionally, applications pools allow you to separate different apps which require different levels of security.
Here's a good resource: IIS and ASP.NET: The Application Pool
Use StringBuilder:
String str;
char a, b, c;
a = 'i';
b = 'c';
c = 'e';
StringBuilder sb = new StringBuilder();
sb.append(a);
sb.append(b);
sb.append(c);
str = sb.toString();
One-liner:
new StringBuilder().append(a).append(b).append(c).toString();
Doing ""+a+b+c
gives:
new StringBuilder().append("").append(a).append(b).append(c).toString();
I asked a related question some time ago.
HotelsCombined has an easy-to-access and useful service to download the data feed files with hotels. Not exactly API, but something you can get, parse and use. Here is how you do it:
If you are interested in details, you may find the sample Python code to filter CSV file to get hotels for a specific city here:
http://mikhail.io/2012/05/17/api-to-get-the-list-of-hotels/
Update:
Unfortunately, HotelsCombined.com has introduced the new regulations: they've restricted the access to data feeds by default. To get the access, a partner must submit some information on why one needs the data. The HC team will review it and then (maybe) will grant access.
Let us think what an interpreter actually is: it's a piece of code that stores values in memory and lets the instructions in a program that it interprets access those values by their names, which are specified inside these instructions. So, the big job of an interpreter is to shape the rules of how we should use the names in those instructions to access the values that the interpreter stores.
On encountering "my", the interpreter creates a lexical variable: a named value that the interpreter can access only while it executes a block, and only from within that syntactic block. On encountering "our", the interpreter makes a lexical alias of a package variable: it binds a name, which the interpreter is supposed from then on to process as a lexical variable's name, until the block is finished, to the value of the package variable with the same name.
The effect is that you can then pretend that you're using a lexical variable and bypass the rules of 'use strict' on full qualification of package variables. Since the interpreter automatically creates package variables when they are first used, the side effect of using "our" may also be that the interpreter creates a package variable as well. In this case, two things are created: a package variable, which the interpreter can access from everywhere, provided it's properly designated as requested by 'use strict' (prepended with the name of its package and two colons), and its lexical alias.
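If a rough analogy helps (and it is only an analogy, in Python rather than Perl), Python's global does something comparable: it makes a name inside a block refer to the module-level variable instead of creating a fresh local one:

```
counter = 0            # module-level name, roughly a "package variable"

def bump():
    global counter     # alias the module-level name in this scope (loosely "our")
    counter += 1

def local_only():
    counter = 99       # a fresh local name (loosely "my"); module-level untouched
    return counter
```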
Sources:
Date has the time part, so we only need to extract it from the Date. I personally prefer the default format parameter of Date when date and time need to be separated, instead of using an extra SimpleDateFormat:
Date date = new Date()
String datePart = date.format("dd/MM/yyyy")
String timePart = date.format("HH:mm:ss")
println "datePart : " + datePart + "\ttimePart : " + timePart
The "argument is of length zero" error takes place when you get an output that is an integer of length 0, not NULL - i.e., integer(0).
You can further verify my point by finding the class of your output:
> class(output)
"integer"
Previous answers were good and correct, but in my opinion, an extra names parameter will make it perfect, and it should be the recommended way, especially when the csv has no headers.
Use the usecols and names parameters:
df = pd.read_csv(file_path, usecols=[3,6], names=['colA', 'colB'])
or use header=None to explicitly tell people that the csv has no headers (anyway both lines are identical):
df = pd.read_csv(file_path, usecols=[3,6], names=['colA', 'colB'], header=None)
So that you can retrieve your data by
# with `names` parameter
df['colA']
df['colB']
instead of
# without `names` parameter
df[0]
df[1]
Based on read_csv, when names are passed explicitly, header behaves like None instead of 0, so one can skip header=None when names exist.
I used the method below, which really is a combination of all the ones above. I also found that onOptionsItemSelected is never activated.
mDrawerToggle.setDrawerIndicatorEnabled(false);
getSupportActionBar().setHomeButtonEnabled(true);
Toolbar toolbar = (Toolbar) findViewById(R.id.tool_bar);
if (toolbar != null) {
toolbar.setNavigationOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
onBackPressed();
}
});
}
If you don't want any dependency on Spring's HATEOAS or the javax.* namespace, use ServletUriComponentsBuilder to get the URI of the current request:
import org.springframework.web.servlet.support.ServletUriComponentsBuilder;
ServletUriComponentsBuilder.fromCurrentRequest();
ServletUriComponentsBuilder.fromCurrentRequestUri();
This solution worked for me
;(function($){
// your code
})(jQuery);
Move your code inside the closure and use $ instead of jQuery.
I found the above solution in https://magento.stackexchange.com/questions/33348/uncaught-typeerror-undefined-is-not-a-function-when-using-a-jquery-plugin-in-ma
...after searching too much
Your EditText should be wrapped in a TextInputLayout:
<android.support.design.widget.TextInputLayout
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:id="@+id/tilEmail">
<EditText
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:inputType="textEmailAddress"
android:ems="10"
android:id="@+id/etEmail"
android:hint="Email"
android:layout_marginTop="10dp"
/>
</android.support.design.widget.TextInputLayout>
To get an error message like you wanted, set the error on the TextInputLayout:
TextInputLayout tilEmail = (TextInputLayout) findViewById(R.id.tilEmail);
if (error){
tilEmail.setError("Invalid email id");
}
You should add design support library dependency. Add this line in your gradle dependencies
compile 'com.android.support:design:22.2.0'
The problem I was having is that I was only using https for my GitHub account. I needed to make sure that my GitHub account was setup for ssh access and that GitHub and heroku were both using the same public keys. These are the steps I took:
Navigate to the ~/.ssh directory and delete the id_rsa and id_rsa.pub if they are there. I started with new keys, though it might not be necessary.
$ cd ~/.ssh
$ rm id_rsa id_rsa.pub
Login to heroku, create a new site and add your public keys:
$ heroku login
...
$ heroku create
$ heroku keys:add
$ git push heroku master
It's the line after that match that you're interested in, right? In sed, that can be accomplished like so:
sed -n '/ABC/{n;p}' infile
Alternatively, grep's A option might be what you're looking for.
-A NUM, Print NUM lines of trailing context after matching lines.
For example, given the following input file:
foo
bar
baz
bash
bongo
You could use the following:
$ grep -A 1 "bar" file
bar
baz
$ sed -n '/bar/{n;p}' file
baz
Hope that helps.
We just have to use @ in front of the deprecated function. No need to change anything else, as mentioned in the above posts. For example: if(!@session_is_registered("username")){ }. Just put @ and the problem is solved.
Set oShell = CreateObject ("WScript.Shell")
oShell.run "cmd.exe /C copy ""S:Claims\Sound.wav"" ""C:\WINDOWS\Media\Sound.wav"" "
Komodo is wonderful, and it runs on OS X; they have a free version, Komodo Edit.
UPDATE from 2015: I've switched to PHPStorm from Jetbrains, the same folks that built IntelliJ IDEA and Resharper. It's better. Not just better. It's well worth the money.
Using @joran's sample data,
ggplot(dat, aes(x=xx, fill=yy)) + geom_histogram(alpha=0.2, position="identity")
Note that the default position of geom_histogram is "stack"; see "position adjustment" on this page:
The code part (in a JavaFX Maven project):
Image imProfile = new Image(getClass().getResourceAsStream("/img/profile128.png"));
ImageView profileImage=new ImageView(imProfile);
Using Jquery you could do something like this:
$(document).ready(function() {
if ($("img").attr("src") != null)
{
if ($("img").attr("src").toString() == "")
{
$("img").attr("src", "images/default.jpg");
}
}
else
{
$("img").attr("src", "images/default.jpg");
}
});
Set the [Console]::OutputEncoding to whatever encoding you want, and print out with [Console]::WriteLine.
If PowerShell's output method has a problem, then don't use it. It feels a bit bad, but works like a charm :)
Run it under a different user name, using "runas" or by scheduling it under a different user in Windows Scheduled Tasks.
Stream is just an abstraction (or a wrapper) over a physical stream of bytes. This physical stream is called the base stream. So there is always a base stream over which a stream wrapper is created, and thus the wrapper is named after the base stream type, i.e. FileStream, MemoryStream etc.
The advantage of the stream wrapper is that you get a unified API to interact with streams of any underlying type: usb, file etc.
Why would you treat data as stream - because data chunks are loaded on-demand, we can inspect/process the data as chunks rather than loading the entire data into memory. This is how most of the programs deal with big files, for eg encrypting an OS image file.
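The same chunk-by-chunk idea, illustrated in Python (the file name and chunk size are arbitrary):

```
# Process a big file without loading it all into memory
with open("os_image.iso", "rb") as f:  # hypothetical file
    while True:
        chunk = f.read(64 * 1024)      # 64 KiB at a time
        if not chunk:
            break
        # inspect/process the chunk here, e.g. feed it to a cipher or hash
```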
In short:
static_cast<>() gives you compile-time checking ability; a C-style cast doesn't.
static_cast<>() is more readable and can be spotted easily anywhere inside C++ source code; a C-style cast isn't.
More Explanation:
The static cast performs conversions between compatible types. It is similar to the C-style cast, but is more restrictive. For example, the C-style cast would allow an integer pointer to point to a char.
char c = 10; // 1 byte
int *p = (int*)&c; // 4 bytes
Since this results in a 4-byte pointer ( a pointer to 4-byte datatype) pointing to 1 byte of allocated memory, writing to this pointer will either cause a run-time error or will overwrite some adjacent memory.
*p = 5; // run-time error: stack corruption
In contrast to the C-style cast, the static cast will allow the compiler to check that the pointer and pointee data types are compatible, which allows the programmer to catch this incorrect pointer assignment during compilation.
int *q = static_cast<int*>(&c); // compile-time error
You can also check this page on more explanation on C++ casts : Click Here
The pattern matches all non-digit characters. This will restrict you to non-negative integers, but for your example it will be more than sufficient.
string input = "0, 10, 20, 30, 100, 200";
Regex.Split(input, @"\D+");
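For comparison, the equivalent split in Python:

```
import re

input_str = "0, 10, 20, 30, 100, 200"
print(re.split(r"\D+", input_str))  # ['0', '10', '20', '30', '100', '200']
```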
Some more alternative options because regexes (regi ?) are awesome!
Here is a Simple regex to do the job:
regex="[^/]*$"
Example (grep):
FP="/hello/world/my/file/path/hello_my_filename.log"
echo $FP | grep -oP "$regex"
#Or using standard input
grep -oP "$regex" <<< $FP
Example (awk):
#note: $regex won't expand inside single quotes, so pass it in with -v
#(the 3-argument match() needs gawk)
echo $FP | awk -v re="$regex" '{match($1, re, a)} END{print a[0]}'
#Or using standard input
awk -v re="$regex" '{match($1, re, a)} END{print a[0]}' <<< $FP
If you need a more complicated regex: For example your path is wrapped in a string.
StrFP="my string is awesome file: /hello/world/my/file/path/hello_my_filename.log sweet path bro."
#this regex matches a string not containing / and ends with a period
#then at least one word character
#so its useful if you have an extension
regex="[^/]*\.\w{1,}"
#usage
grep -oP "$regex" <<< $StrFP
#alternatively you can get a little more complicated and use lookarounds
#this regex matches a part of a string that starts with / that does not contain a /
##then uses the lazy operator ? to match any character at any amount (as little as possible hence the lazy)
##that is followed by a space
##this allows use to match just a file name in a string with a file path if it has an exntension or not
##also if the path doesnt have file it will match the last directory in the file path
##however this will break if the file path has a space in it.
regex="(?<=/)[^/]*?(?=\s)"
#to fix the above problem you can use sed to remove spaces from the file path only
## as a side note unfortunately sed has limited regex capibility and it must be written out in long hand.
NewStrFP=$(echo $StrFP | sed 's:\(/[a-z]*\)\( \)\([a-z]*/\):\1\3:g')
grep -oP "$regex" <<< $NewStrFP
Total solution with Regexes:
This function can give you the filename with or without extension of a linux filepath even if the filename has multiple "."s in it. It can also handle spaces in the filepath and if the file path is embedded or wrapped in a string.
#you may notice that the sed replace has gotten really crazy looking
#I just added all of the allowed characters in a linux file path
function Get-FileName(){
local FileString="$1"
local NoExtension="$2"
local FileString=$(echo $FileString | sed 's:\(/[a-zA-Z0-9\<\>\|\\\:\)\(\&\;\,\?\*]*\)\( \)\([a-zA-Z0-9\<\>\|\\\:\)\(\&\;\,\?\*]*/\):\1\3:g')
local regex="(?<=/)[^/]*?(?=\s)"
local FileName=$(echo $FileString | grep -oP "$regex")
if [[ "$NoExtension" != "" ]]; then
sed 's:\.[^\.]*$::g' <<< $FileName
else
echo "$FileName"
fi
}
## call the function with extension
Get-FileName "my string is awesome file: /hel lo/world/my/file test/path/hello_my_filename.log sweet path bro."
##call function without extension
Get-FileName "my string is awesome file: /hel lo/world/my/file test/path/hello_my_filename.log sweet path bro." "1"
If you have to mess with a Windows path, you can start with this one:
[^\\]*$
I'm using the following code (in C#) to change the color of the NavigationBar:
NavigationController.NavigationBar.SetBackgroundImage (new UIImage (), UIBarMetrics.Default);
NavigationController.NavigationBar.SetBackgroundImage (new UIImage (), UIBarMetrics.LandscapePhone);
NavigationController.NavigationBar.BackgroundColor = UIColor.Green;
The trick is that you need to get rid of the default background image and then the color will appear.
The code below prints only the regular files in the given directory, skipping any subdirectories while traversing.
#include <dirent.h>
#include <stdio.h>
#include <errno.h>
#include <sys/stat.h>
#include <string.h>

int main(void)
{
    DIR *d;
    struct dirent *dir;
    char path[1000] = "/home/joy/Downloads";
    char full_path[1000];

    d = opendir(path);
    if (d)
    {
        while ((dir = readdir(d)) != NULL)
        {
            //Condition to check for a regular file.
            if (dir->d_type == DT_REG) {
                full_path[0] = '\0';
                strcat(full_path, path);
                strcat(full_path, "/");
                strcat(full_path, dir->d_name);
                printf("%s\n", full_path);
            }
        }
        closedir(d);
    }
    return 0;
}
None of the existing answers worked for me with a single-line 10GB file. Grep runs out of memory even on a machine with 768 GB of RAM!
$ cat /proc/meminfo | grep MemTotal
MemTotal: 791236260 kB
$ ls -lh test.json
-rw-r--r-- 1 me all 9.2G Nov 18 15:54 test.json
$ grep -o '0,0,0,0,0,0,0,0,' test.json | wc -l
grep: memory exhausted
0
So I wrote a very simple Rust program to do it.
$ cargo install count_occurences
$ count_occurences '0,0,0,0,0,0,0,0,' test.json
99094198
It's a little slow (1 minute for 10GB), but at least it doesn't run out of memory!
Some dude has come up with a smart preprocessor idea in this post
Removing this line from my code solved my problem.
header("Content-Type: application/json; charset=UTF-8");
$('.checkbox').prop('checked',true);
$('.checkbox').prop('checked',false);
... works perfectly with jQuery 1.9.1
Generally, if the installation went smoothly, it will create the desktop icons/folders. Maybe check the installation summary log to see if there are any underlying errors.
It should be located in C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\(date stamp)\
If you are doing this just for a parameter coming into a Stored Procedure, you can use the following:
declare @badIndex int
set @badIndex = PatIndex('%[^0-9]%', @Param)
while @badIndex > 0
begin
    set @Param = Replace(@Param, Substring(@Param, @badIndex, 1), '')
    set @badIndex = PatIndex('%[^0-9]%', @Param)
end
The flowchart below can be of help when choosing one of the various semantic HTML5 elements:
The following command gives you a list of all background processes in your session, along with the pid. You can then use it to kill the process.
jobs -l
Example usage:
$ sleep 300 &
$ jobs -l
[1]+ 31139 Running sleep 300 &
$ kill 31139
I am aware that this is an old post, but I hope it will still help somebody, with what I hope is a simple example:
Background:
I had a many-to-many case: the same user is listed multiple times with multiple values. I wanted to create a new record, so UPDATE wouldn't make sense in my case; I needed to address a particular user just as I would with a WHERE clause.
INSERT INTO MyTable (aUser, aCar)
VALUES ('User123', 'Mini');
By using this construct you actually target a specific user (user123, who has other records), so you don't really need a WHERE clause, I reckon.
The output could be:
aUser aCar
user123 mini
user123 HisOtherCarThatWasThereBefore
If after installing you need redis
to run all the time, just type in a terminal:
redis-server &
Running redis using upstart on Ubuntu
I've been trying to understand how to set up systems from the ground up on Ubuntu. I just installed redis
onto the box and here's how I did it and some things to look out for.
To install:
sudo apt-get install redis-server
That will create a redis
user and install the init.d
script for it. Since upstart
is now the replacement for using init.d, I figure I should convert it to run using upstart
.
To disable the default init.d
script for redis
:
sudo update-rc.d redis-server disable
Then create /etc/init/redis-server.conf
with the following script:
description "redis server"
start on runlevel [23]
stop on shutdown
exec sudo -u redis /usr/bin/redis-server /etc/redis/redis.conf
respawn
This script tells upstart
what command to run to start the process. The last line also tells upstart
to keep trying to respawn the process if it dies.
One thing I had to change in /etc/redis/redis.conf
is daemonize yes
to daemonize no
. If you don't change it, then redis-server
will fork and daemonize itself, and the parent process goes away. When this happens, upstart
thinks that the process has died/stopped and you won't have control over the process from within upstart
.
Now you can use the following commands to control your redis-server
:
sudo start redis-server
sudo restart redis-server
sudo stop redis-server
Hope this was helpful!
Use the code below:
alert(+(Math.round(number + "e+2") + "e-2"));
\r
is a carriage return character; it tells your terminal emulator to move the cursor to the start of the line.
The cursor is the position where the next characters will be rendered.
So, printing a \r
allows you to overwrite the current line of the terminal emulator.
Tom Zych figured out why the output of your program is o world
while the \r
is at the end of the line and you don't print anything after that:
When your program exits, the shell prints the command prompt. The terminal renders it where you left the cursor. Your program leaves the cursor at the start of the line, so the command prompt partly overwrites the line you printed. This explains why you see your command prompt followed by o world
.
The online compiler you mention just prints the raw output to the browser. The browser ignores control characters, so the \r
has no effect.
See https://en.wikipedia.org/wiki/Carriage_return
Here is a usage example of \r
:
#include <stdio.h>
#include <unistd.h>
int main()
{
char chars[] = {'-', '\\', '|', '/'};
unsigned int i;
for (i = 0; ; ++i) {
printf("%c\r", chars[i % sizeof(chars)]);
fflush(stdout);
usleep(200000);
}
return 0;
}
It repeatedly prints the characters -
\
|
/
at the same position to give the illusion of a rotating |
in the terminal.
The C Programming Language (K&R) would have you check for null == ptr to avoid an accidental assignment.
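For instance, a minimal C++ sketch of the idea; the comment shows the typo that the reversed operand order guards against:
#include <cstddef>

int main() {
    int value = 42;
    int *ptr = &value;

    // The typo "if (ptr = NULL)" compiles (it assigns, then tests the result),
    // while "if (NULL = ptr)" is a compile error - a constant can't be assigned to.
    if (NULL == ptr) {
        return 1; // not taken here: ptr points at value
    }
    return 0;
}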
Want some serious code? Here it is.
var exists = System.Diagnostics.Process.GetProcessesByName(System.IO.Path.GetFileNameWithoutExtension(System.Reflection.Assembly.GetEntryAssembly().Location)).Count() > 1;
This works for any application (any name) and will be true
if there is another instance of the same application running.
Edit: To suit your needs you can use either of these:
if (System.Diagnostics.Process.GetProcessesByName(System.IO.Path.GetFileNameWithoutExtension(System.Reflection.Assembly.GetEntryAssembly().Location)).Count() > 1) return;
from your Main method to quit the method... OR
if (System.Diagnostics.Process.GetProcessesByName(System.IO.Path.GetFileNameWithoutExtension(System.Reflection.Assembly.GetEntryAssembly().Location)).Count() > 1) System.Diagnostics.Process.GetCurrentProcess().Kill();
which will kill the currently loading process instantly.
You need to add a reference to System.Core.dll for the .Count()
extension method. Alternatively, you can use the .Length
property.
You could use LINQ:
var checkedButton = container.Controls.OfType<RadioButton>()
.FirstOrDefault(r => r.Checked);
Note that this requires that all of the radio buttons be directly in the same container (e.g., Panel or Form), and that there is only one group in the container. If that is not the case, you could make List<RadioButton>
s in your constructor for each group, then write list.FirstOrDefault(r => r.Checked)
.
In versions of Gradle prior to 5, the test.single
system property can be used to specify a single test.
You can do gradle -Dtest.single=ClassUnderTestTest test
if you want to test a single class, or use a regexp like gradle -Dtest.single=ClassName*Test test.
You can find more examples of filtering classes for tests under this link.
Gradle 5 removed this option, as it was superseded by test filtering using the --tests
command line option.
When the directory is deleted, the inode for that directory (and the inodes for its contents) is recycled. The pointer your shell has to that directory's inode (and to its contents' inodes) is no longer valid. When the directory is restored from backup, the old inodes are not (necessarily) reused; the directory and its contents are stored on random inodes. The only thing that stays the same is that the parent directory reuses the same name for the restored directory (because you told it to).
Now if you attempt to access the contents of the directory that your original shell is still pointing to, it communicates that request to the file system as a request for the original inode, which has since been recycled (and may even be in use for something entirely different now). So you get a stale file handle
message because you asked for some nonexistent data.
When you perform a cd
operation, the shell reevaluates the inode location of whatever destination you give it. Now that your shell knows the new inode for the directory (and the new inodes for its contents), future requests for its contents will be valid.
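If you want to see the inode recycling for yourself, here is a small POSIX-only C++ sketch (the file name demo.txt is made up; the same effect applies to directories):
#include <sys/stat.h>
#include <cstdio>

// Print the inode number behind a path.
static void show_inode(const char *path) {
    struct stat st;
    if (stat(path, &st) == 0)
        std::printf("%s -> inode %llu\n", path, (unsigned long long)st.st_ino);
}

int main() {
    std::FILE *f = std::fopen("demo.txt", "w");
    if (f) std::fclose(f);
    show_inode("demo.txt");           // original inode
    std::remove("demo.txt");
    f = std::fopen("demo.txt", "w");  // recreate under the same name
    if (f) std::fclose(f);
    show_inode("demo.txt");           // usually a different inode
    std::remove("demo.txt");          // clean up
}
An open handle to the first file would now point at a recycled inode, which is exactly the stale-handle situation described above.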
How about something like this:
PROXY = "149.215.113.110:70"
webdriver.DesiredCapabilities.FIREFOX['proxy'] = {
"httpProxy":PROXY,
"ftpProxy":PROXY,
"sslProxy":PROXY,
"noProxy":None,
"proxyType":"MANUAL",
"class":"org.openqa.selenium.Proxy",
"autodetect":False
}
# you have to use remote, otherwise you'll have to code it yourself in python to
driver = webdriver.Remote("http://localhost:4444/wd/hub", webdriver.DesiredCapabilities.FIREFOX)
You can read more about it here.
There is no real limit -- everything is named with a 160-bit name. The size of the file must be representable in a 64 bit number so no real limit there either.
There is a practical limit, though. I have a repository that's ~8GB with >880,000 files and git gc takes a while. The working tree is rather large so operations that inspect the entire working directory take quite a while. This repo is only used for data storage, though, so it's just a bunch of automated tools that handle it. Pulling changes from the repo is much, much faster than rsyncing the same data.
%find . -type f | wc -l
791887
%time git add .
git add . 6.48s user 13.53s system 55% cpu 36.121 total
%time git status
# On branch master
nothing to commit (working directory clean)
git status 0.00s user 0.01s system 0% cpu 47.169 total
%du -sh .
29G .
%cd .git
%du -sh .
7.9G .
I see you would want to do this if you wanted to make, say, the whole box of a menu item clickable. I used to insert an 'li' tag in 'a' tags to do this but this seems more valid.
Right. The function you pass to getLocations() won't get called until the data is available, so returning "country" before it's been set isn't going to help you.
The way you need to do this is to have the function that you pass to geocoder.getLocations() actually do whatever it is you wanted done with the returned values.
Something like this:
function reverseGeocode(latitude,longitude){
var geocoder = new GClientGeocoder();
var latlng = new GLatLng(latitude, longitude);
geocoder.getLocations(latlng, function(addresses) {
var address = addresses.Placemark[0].address;
var country = addresses.Placemark[0].AddressDetails.Country.CountryName;
var countrycode = addresses.Placemark[0].AddressDetails.Country.CountryNameCode;
var locality = addresses.Placemark[0].AddressDetails.Country.AdministrativeArea.SubAdministrativeArea.Locality.LocalityName;
do_something_with_address(address, country, countrycode, locality);
});
}
function do_something_with_address(address, country, countrycode, locality) {
if (country==="USA") {
alert("USA A-OK!"); // or whatever
}
}
If you might want to do something different every time you get the location, then pass the function as an additional parameter to reverseGeocode:
function reverseGeocode(latitude,longitude, callback){
// Function contents the same as above, then
callback(address, country, countrycode, locality);
}
reverseGeocode(latitude, longitude, do_something_with_address);
If this looks a little messy, then you could take a look at something like the Deferred feature in Dojo, which makes the chaining between functions a little clearer.
SELECT COUNT(1) FROM FB
WHERE Dte > DATE_SUB(now(), INTERVAL 2 MONTH)
This one is quite simple and does not require a delegate as you say.
resultString = Regex.Replace(subjectString,
@"(?im)^[{(]?[0-9A-F]{8}[-]?(?:[0-9A-F]{4}[-]?){3}[0-9A-F]{12}[)}]?$",
"'$0'");
This matches the following styles, which are all equivalent and acceptable formats for a GUID.
ca761232ed4211cebacd00aa0057b223
CA761232-ED42-11CE-BACD-00AA0057B223
{CA761232-ED42-11CE-BACD-00AA0057B223}
(CA761232-ED42-11CE-BACD-00AA0057B223)
Update 1
@NonStatic makes the point in the comments that the above regex will match false positives which have a wrong closing delimiter.
This can be avoided by regex conditionals which are broadly supported.
Conditionals are supported by the JGsoft engine, Perl, PCRE, Python, and the .NET framework. Ruby supports them starting with version 2.0. Languages such as Delphi, PHP, and R that have regex features based on PCRE also support conditionals. (source http://www.regular-expressions.info/conditional.html)
The regex that follows will match
{123}
(123)
123
And will not match
{123)
(123}
{123
(123
123}
123)
Regex:
^({)?(\()?\d+(?(1)})(?(2)\))$
The solution is simplified to match only numbers, to show more clearly what is required.
Flexbox can be used to make a child wider than its parent with three lines of CSS.
Only the child’s display
, margin-left
and width
need to be set. margin-left
depends on the child’s width
. The formula is:
margin-left: calc(-.5 * var(--child-width) + 50%);
CSS variables can be used to avoid manually calculating the left margin.
Demo #1: Manual calculation
.parent {
  background-color: aqua;
  height: 50vh;
  margin-left: auto;
  margin-right: auto;
  width: 50vw;
}

.child {
  background-color: pink;
  display: flex;
}

.wide {
  margin-left: calc(-37.5vw + 50%);
  width: 75vw;
}

.full {
  margin-left: calc(-50vw + 50%);
  width: 100vw;
}

<div class="parent">
  <div>
    parent
  </div>
  <div class="child wide">
    75vw
  </div>
  <div class="child full">
    100vw
  </div>
</div>

Demo #2: Using CSS variables

.parent {
  background-color: aqua;
  height: 50vh;
  margin-left: auto;
  margin-right: auto;
  width: 50vw;
}

.child {
  background-color: pink;
  display: flex;
  margin-left: calc(-.5 * var(--child-width) + 50%);
  width: var(--child-width);
}

.wide {
  --child-width: 75vw;
}

.full {
  --child-width: 100vw;
}

<div class="parent">
  <div>
    parent
  </div>
  <div class="child wide">
    75vw
  </div>
  <div class="child full">
    100vw
  </div>
</div>
Inner join - An inner join using either of the equivalent queries gives the intersection of the two tables, i.e. the rows they have in common.
Left outer join -
A left outer join will give all rows in A, plus any common rows in B.
Full outer join -
A full outer join will give you the union of A and B, i.e. all the rows in A and all the rows in B. If something in A doesn't have a corresponding datum in B, then the B portion is null, and vice versa.
Regular expressions actually aren't part of ANSI C. It sounds like you might be talking about the POSIX regular expression library, which comes with most (all?) *nixes. Here's an example of using POSIX regexes in C (based on this):
#include <regex.h>

regex_t regex;
int reti;
char msgbuf[100];

/* Compile regular expression */
reti = regcomp(&regex, "^a[[:alnum:]]", 0);
if (reti) {
    fprintf(stderr, "Could not compile regex\n");
    exit(1);
}

/* Execute regular expression */
reti = regexec(&regex, "abc", 0, NULL, 0);
if (!reti) {
    puts("Match");
}
else if (reti == REG_NOMATCH) {
    puts("No match");
}
else {
    regerror(reti, &regex, msgbuf, sizeof(msgbuf));
    fprintf(stderr, "Regex match failed: %s\n", msgbuf);
    exit(1);
}

/* Free memory allocated to the pattern buffer by regcomp() */
regfree(&regex);
Alternatively, you may want to check out PCRE, a library for Perl-compatible regular expressions in C. The Perl syntax is pretty much the same syntax used in Java, Python, and a number of other languages. The POSIX syntax is the syntax used by grep
, sed
, vi
, etc.
My way to replicate Excel's MOD(a,b)
in VBA is to use XLMod(a,b)
where you include the following function:
Function XLMod(a, b)
' This replicates the Excel MOD function
XLMod = a - b * Int(a / b)
End Function
in your VBA Module
OK, I found out:
You have to put /show
after the URL you're working on:
http://jsfiddle.net/<your_fiddle_id>/show/
That is the page that shows the results.
When you save that page as a file, it is all in one HTML file.
For example:
http://jsfiddle.net/Ua8Cv/show/
for the site http://jsfiddle.net/Ua8Cv
I will show you two ways to accomplish what you want:
First way: Decorate your field with the JsonProperty attribute in order to skip serialization of that field if it is null.
public class Foo
{
public int Id { get; set; }
public string Name { get; set; }
[JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
public List<Something> Somethings { get; set; }
}
Second way: If you are dealing with more complex scenarios, you could use the Web API convention ("ShouldSerialize") to skip serialization of that field depending on some specific logic.
public class Foo
{
public int Id { get; set; }
public string Name { get; set; }
public List<Something> Somethings { get; set; }
public bool ShouldSerializeSomethings() {
var resultOfSomeLogic = false;
return resultOfSomeLogic;
}
}
Web API uses JSON.NET, which uses reflection for serialization, so when it detects (for instance) a ShouldSerializeFieldX() method, the field named FieldX will not be serialized.
Simple way:
repository.findAll(Sort.by(Sort.Direction.DESC, "colName"));
Use CSS3 flexbox with justify-content: center;
<div class="row">
<div class="col" style="background:red;">content1</div>
<div class="col" style="">content2</div>
</div>
.row {
display: flex; /* equal height of the children */
height:100px;
border:1px solid red;
width: 400px;
justify-content:center;
}
If you open /Users/{your name}/android sdks/tools/android (double-click it), then click the "Android SDK Manager" menu and then "Preferences", you can change your proxy settings specifically for the Android SDK Manager. These proxy settings also apply to the "Android SDK Manager" if used within Eclipse.
The name accepted into TR1 (and the draft for the next standard) is std::unordered_map
, so if you have that available, it's probably the one you want to use.
Other than that, using it is a lot like using std::map
, with the proviso that when/if you traverse the items in an std::map
, they come out in the order specified by operator<
, but for an unordered_map, the order is generally meaningless.
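A small sketch of that difference in traversal order (the keys and values here are arbitrary):
#include <iostream>
#include <map>
#include <string>
#include <unordered_map>

int main() {
    std::map<std::string, int> ordered{{"pear", 1}, {"apple", 2}, {"fig", 3}};
    std::unordered_map<std::string, int> hashed(ordered.begin(), ordered.end());

    // std::map traverses keys in the order given by operator< on the key type:
    for (const auto &kv : ordered) std::cout << kv.first << ' ';  // apple fig pear
    std::cout << '\n';

    // unordered_map traversal order is an implementation detail:
    for (const auto &kv : hashed) std::cout << kv.first << ' ';   // some arbitrary order
    std::cout << '\n';
}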
Number.prototype.toLocaleString()
would have been awesome if it were provided natively by all browsers (Safari, notably, lacked it).
I checked all the other answers, but no one seemed to polyfill it. Here is a PoC toward that, which is actually a combination of the first two answers: if toLocaleString
works it uses it; if it doesn't, it falls back to a custom function.
var putThousandsSeparators;

putThousandsSeparators = function(value, sep) {
  if (sep == null) {
    sep = ',';
  }
  // check if it needs formatting
  if (value.toString() === value.toLocaleString()) {
    // split decimals
    var parts = value.toString().split('.');
    // format whole numbers
    parts[0] = parts[0].replace(/\B(?=(\d{3})+(?!\d))/g, sep);
    // put them back together
    value = parts[1] ? parts.join('.') : parts[0];
  } else {
    value = value.toLocaleString();
  }
  return value;
};

alert(putThousandsSeparators(1234567.890));
No internal means, AFAIK. Use this: memset(c, 0, length);
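For context, a minimal compilable version of that call (the buffer name and size are assumptions matching the snippet):
#include <cstring>

int main() {
    char c[16];                  // hypothetical buffer
    std::memset(c, 0, sizeof c); // zero every byte; "length" is the buffer size
    return 0;
}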
OR operator:
<div ng-repeat="k in items">
<div ng-if="k || 'a' or k == 'b'">
<!-- SOME CONTENT -->
</div>
</div>
Even though it is simple enough to read, I hope that as a developer you use better names than 'a', 'k', 'b', etc.
For Example:
<div class="links-group" ng-repeat="group in groups" ng-show="!group.hidden">
<li ng-if="user.groups.admin || group.title == 'Home Pages'">
<!--Content-->
</li>
</div>
Another OR example
<p ng-if="group.title != 'Dispatcher News' or group.title != 'Coordinator News'" style="padding: 5px;">No links in group.</p>
AND operator (For those stumbling across this stackoverflow answer looking for an AND instead of OR condition)
<div class="links-group" ng-repeat="group in groups" ng-show="!group.hidden">
<li ng-if="user.groups.admin && group.title == 'Home Pages'">
<!--Content-->
</li>
</div>
Thought I'd share this code. It's fairly simple, but effective. It parses comma-separated files, including values wrapped in quotation marks. You can easily modify it to suit your needs.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    //argv[1] path to csv file
    //argv[2] number of lines to skip
    //argv[3] length of longest value (in characters)
    FILE *pfinput;
    unsigned int nSkipLines, currentLine, lenLongestValue;
    char *pTempValHolder;
    int c;
    unsigned int vcpm;   //value character marker
    int QuotationOnOff;  //0 - off, 1 - on

    nSkipLines = atoi(argv[2]);
    lenLongestValue = atoi(argv[3]);
    pTempValHolder = (char*)malloc(lenLongestValue + 1); //+1 for the terminating '\0'

    if ((pfinput = fopen(argv[1], "r"))) {
        rewind(pfinput);
        currentLine = 1;
        vcpm = 0;
        QuotationOnOff = 0;
        //the currentLine > nSkipLines conditions skip the first argv[2] lines
        while ((c = fgetc(pfinput)) != EOF)
        {
            switch (c)
            {
                case ',':
                    if (!QuotationOnOff && currentLine > nSkipLines)
                    {
                        pTempValHolder[vcpm] = '\0';
                        printf("%s,", pTempValHolder);
                        vcpm = 0;
                    }
                    else if (currentLine > nSkipLines)
                    {
                        //keep commas that appear inside quoted values
                        pTempValHolder[vcpm] = c;
                        vcpm++;
                    }
                    break;
                case '\n':
                    if (currentLine > nSkipLines)
                    {
                        pTempValHolder[vcpm] = '\0';
                        printf("%s\n", pTempValHolder);
                        vcpm = 0;
                    }
                    currentLine++;
                    break;
                case '\"':
                    if (currentLine > nSkipLines)
                    {
                        QuotationOnOff = !QuotationOnOff; //toggle quoted state
                        pTempValHolder[vcpm] = c;
                        vcpm++;
                    }
                    break;
                default:
                    if (currentLine > nSkipLines)
                    {
                        pTempValHolder[vcpm] = c;
                        vcpm++;
                    }
                    break;
            }
        }
        fclose(pfinput);
        free(pTempValHolder);
    }
    return 0;
}
You have to be specific about the columns you are selecting. If your user
table has four columns id, name, username, opted_in
, you must select exactly those four columns in the query. The syntax looks like:
INSERT INTO user (id, name, username, opted_in)
SELECT id, name, username, opted_in
FROM user LEFT JOIN user_permission AS userPerm ON user.id = userPerm.user_id
However, there does not appear to be any reason to join against user_permission
here, since none of the columns from that table would be inserted into user
. In fact, this INSERT
seems bound to fail with primary key uniqueness violations.
MySQL does not support inserts into multiple tables at the same time. You either need to perform two INSERT
statements in your code, using the last insert id from the first query, or create an AFTER INSERT
trigger on the primary table.
INSERT INTO user (name, username, email, opted_in) VALUES ('a','b','c',0);
/* Gets the id of the new row and inserts into the other table */
INSERT INTO user_permission (user_id, permission_id) VALUES (LAST_INSERT_ID(), 4)
Or using a trigger:
CREATE TRIGGER creat_perms AFTER INSERT ON `user`
FOR EACH ROW
BEGIN
INSERT INTO user_permission (user_id, permission_id) VALUES (NEW.id, 4);
END
The correct syntax is:
FOR EACH ROW SET NEW.bname = CONCAT( UCASE( LEFT( NEW.bname, 1 ) )
, LCASE( SUBSTRING( NEW.bname, 2 ) ) )
To use Collections.sort(List, Comparator), you need to create a class that implements the Comparator interface and write the compare() logic in it.
You can do something like this:
class StudentComparator implements Comparator<Student>
{
    public int compare(Student s1, Student s2)
    {
        // code to compare 2 students goes here
        return 0; // placeholder
    }
}
To sort do this:
Collections.sort(list, new StudentComparator())
This question is already answered here
The classpath never includes specific files. It includes directories and jar files. So, put that file in a directory that is in your classpath
.
Log4j
properties aren't (normally) used in developing apps (unless you're debugging Eclipse itself!). So what you really want is to build the executable Java app (Application, WAR, EAR or whatever) and include the Log4j
properties in the runtime classpath.
I found a solution. It works fine when I remove the following line from the form:
enctype="multipart/form-data"
Now it passes all parameters in the request correctly:
<form action="/registration" method="post">
<%-- error messages --%>
<div class="form-group">
<c:forEach items="${registrationErrors}" var="error">
<p class="error">${error}</p>
</c:forEach>
</div>
You can easily create a persistent session using:
s = requests.Session()
After that, continue with your requests as you would:
s.post('https://localhost/login.py', login_data)
#logged in! cookies saved for future requests.
r2 = s.get('https://localhost/profile_data.json', ...)
#cookies sent automatically!
#do whatever, s will keep your cookies intact :)
For more about sessions: https://requests.kennethreitz.org/en/master/user/advanced/#session-objects